- Convenors:
  Marek Troszynski (Civitas University, NASK-PIB)
  Jacek Bieliński (Civitas University, NASK-PIB)
- Chairs:
  Marek Troszynski (Civitas University, NASK-PIB)
  Jacek Bieliński (Civitas University, NASK-PIB)
- Format:
- Traditional Open Panel
Short Abstract
This panel examines how AI and digital infrastructures in public administration materialise sociotechnical imaginaries of efficiency, neutrality, and control, and how such imaginaries reconfigure bureaucratic authority, responsibility, and the moral order of governance.
Description
Artificial intelligence (AI) and a growing ecosystem of digital technologies—ranging from data analytics and algorithmic decision-making tools to automated document workflows and software platforms—are increasingly woven into the infrastructures of public administration. These technologies promise efficiency, transparency, and rational governance. Yet such promises are not merely technical; they belong to broader infrastructural imaginaries that link visions of automation and digitalisation to ideals of control, neutrality, and trust in bureaucratic order.
This panel invites contributions in the sociology of technology and Science and Technology Studies (STS) that explore how AI and software-based infrastructures are co-produced with the institutional logics, moral economies, and epistemic practices of public administration. We seek theoretical and empirical papers examining how these systems are designed, implemented, and resisted within administrative contexts—from experimental pilots and digital twins to large-scale data infrastructures. How do such sociotechnical arrangements redefine what counts as evidence, fairness, or due process? How are administrative values translated into code, and how do civil servants interpret, negotiate, or contest these translations in their everyday work?
Particular attention is given to civil servants as epistemic actors who must reconcile algorithmic scripts and digital protocols with professional judgment and moral accountability. Their engagements with digital systems—as both tools for decision-making and sources of uncertainty—reveal how responsibility, discretion, and risk are redistributed across human–machine assemblages.
The panel also invites reflection on the political and ideological dimensions of digital infrastructures’ supposed neutrality. When automation and software systems are framed as depoliticised solutions, what forms of exclusion, discrimination, or opacity become institutionalised? Conversely, how might alternative infrastructural imaginaries foster transparency, care, and public trust?
By connecting grounded studies of AI and digital infrastructures to broader debates on governance and state transformation, this panel seeks to theorise how contemporary bureaucracies are being recomposed through code—and how sociotechnical imaginaries shape the moral and political futures of administration itself.
Accepted papers
Session 1
Paper short abstract
While many public sector organisations adopt AI for various tasks, individual public sector employees also adopt generative AI tools in an ad hoc or ‘shadow’ manner to complete their own work. As such, the adoption of GenAI tools within public administrations can bypass the organisation itself.
Paper long abstract
We conducted a large survey of public sector managers in seven EU countries to explore their perspectives on, and use of, AI and GenAI in the context of their own work and that of their organisation. Most respondents reported that their organisations are already using AI, with still more reporting such initiatives in the planning stages, primarily for service delivery and internal operations. When it came to their own use of AI for their work, a substantial minority of these managers reported using (or planning to use) GenAI tools, often for support with writing tasks. Much of this use constitutes an ad hoc or ‘shadow’ adoption, whereby GenAI is used within the organisation at the personal initiative of the employee without being formalised through an organisational policy. This introduces a dynamic whereby the adoption of GenAI tools within public administrations can, to an extent, bypass the organisation itself. Further data collection and analysis will be conducted in 2026, with the exploration of emerging themes continuing in parallel. We propose to present interim findings for discussion at the conference in September.
Paper short abstract
Efforts to automate public administration tasks often create additional work, despite the aim of reducing and liberating human work. We show how public sector professionals assist software robots and introduce the concept of human-assisted automation to make this hidden labour visible.
Paper long abstract
Public administrations are increasingly digitalising their work processes and customer interactions to improve the (cost-)efficiency, reliability and accessibility of their services. This has led them to experiment with and implement automation technologies, such as Robotic Process Automation (RPA) and various AI tools. Although IT consultants promote RPA robots with the promise that, by automating simple, repetitive and rule-based tasks, they will free human workers for more meaningful and demanding work, their implementation often fails. Our study critically examines software robots in action by asking what kinds of additional work they generate when used to reduce routine tasks.
Drawing on ethnographic methods and materials, we analyse two cases of RPA implementation in Finland. The first case focuses on an experiment conducted by the Tax Administration, which aimed to identify measures to improve the organisation’s efficiency and productivity by piloting software robots in three different tasks. The second case explores a software robot implemented by a Wellbeing Services County to automate a small phase of data work in primary healthcare. The organisational goal was to improve data quality while reducing the workload of data workers. Our findings show how tensions emerge between anticipated RPA capabilities and their actual performance, particularly when automation generates additional work rather than reducing it. To make this additional work visible, we propose the concept of human-assisted automation. Using this concept, we identify the diverse forms of work that emerge as civil servants and other professionals must assist automation to make these robots function properly.
Paper short abstract
This study examines how the roles of AI technologies are constructed in EU discourse through a narrative lens. More specifically, we aim to analyse the various roles attributed to AI technologies at the EU level and understand whether these can be mapped to “strong” and “weak” AI narratives.
Paper long abstract
This study examines how the roles of AI technologies are constructed in EU discourse through a narrative lens. More specifically, we aim to analyse the various roles attributed to AI technologies at the EU level and understand whether these can be mapped to “strong” and “weak” AI narratives. We borrow the conceptual distinction described by Bory et al. (2025), which differentiates between (super)humanlike (strong) aspirations for AI and narrower, system-/function-focused (weak) accounts. Using methods from computational narrative understanding with a focus on narrative roles, we assemble and investigate a collection of EU policy documents and debates from the past decade; the exact sampling approach and time span are still to be determined, with resources such as EUR-Lex, the OECD AI Policy Observatory, the Global News Dataset, and NewsAPI currently candidates for data collection. The investigation will involve a quantitative study, supplemented with human validation, and an accompanying qualitative analysis conducted on a subset of our data. Our analysis will build on recent approaches to operationalising character roles in public discourse, e.g., using Greimas’ actantial model (Elfes, 2025) or taxonomy-free labelling (Hobson et al., 2025). In doing so, we aim to identify common tropes underpinning the characterisation, development, and implementation of AI technologies in the EU context and to offer a methodological contribution to this line of research.
This proposal is a work-in-progress; during the Coding the State panel we would discuss our preliminary results.
Paper short abstract
Based on ethnographic research in Colombia, the paper explores how data infrastructures reconfigure discretion, responsibility, and authority in welfare administration. These shifts are negotiated through data bureaucrats’ different imaginaries of welfare administration, poverty, and fairness.
Paper long abstract
Colombia’s welfare system relies on algorithmic targeting and scoring of vulnerability. This paper investigates how data infrastructures reconfigure welfare administration by examining the practices and notions of data bureaucrats working with Colombia’s digital welfare infrastructure, SISBÉN.
Insights from ethnographic fieldwork suggest that national officials, who design and manage SISBÉN, prioritize minimizing “manipulation” of the system by local civil servants and citizens. They envision algorithmic classification and interoperable data systems as routes to reliable, neutral information about poverty for fairer social assistance. In contrast, local civil servants, who collect data through home visits and interviews with potential welfare recipients, do not necessarily find the system capable of accurately capturing needs and argue for the relevance of their on-the-ground experiences. Some street-level bureaucrats use their discretion, constrained by opaque data systems that monitor their work, to strategically adapt the information they enter and partially circumvent rules in attempts to help people obtain more favorable classifications. In interactions with applicants, they navigate conflicting interests and demands, including maintaining local relationships and meeting job requirements.
The paper argues that national data bureaucrats enact imaginaries of centralized welfare administration, standardized poverty knowledge, and fairness through control. Simultaneously, local data bureaucrats enact imaginaries of local welfare administration, situated, on-the-ground judgments of need, and fairness as relational accountability. I demonstrate how bureaucratic discretion, responsibility for welfare, and authority over welfare are negotiated between data bureaucrats and data infrastructures in reconfigurations of how poverty is and should be known and of who can and should make welfare decisions.
Paper short abstract
This contribution examines how the roles of the government are framed in AI policy (including observer, guarantor, mitigator, and facilitator), why certain roles dominate while others are neglected, and what this tells us about the distribution of power in AI governance.
Paper long abstract
The field of AI is characterized by a high concentration of power in a small number of Big Tech companies, while society has relatively little influence. These power asymmetries have a considerable impact on AI policy and governance. Against this backdrop, this study examines which roles the government plays in AI policy and governance, and whether the government is reinforcing or reshaping existing power asymmetries in AI. To do so, this study draws on two strands of literature, focusing on the concept of power and the roles of government in sociotechnical transformation. First, the power of Big Tech companies in AI policy and governance manifests itself in numerous ways, including lobbying, regulatory capture, and funding for academics and non-governmental organisations. This study argues that different types of Big Tech power (economic, political and ideational) reinforce each other, and that it is important to focus on more subtle and indirect forms of power, such as power in ideas, examining how certain pro-tech and tech-solutionist framings gain prominence. Second, this study draws on recent work on the roles of government in sociotechnical transformations and emphasizes the shift from approaching the role of government in technology policy as a difference in degree (more or less government) to a difference in kind (diverse roles of government, including observer, facilitator and lead-user). Empirically, this study examines the framing of the roles of government in AI policy documents, analyzing why some roles dominate while others are absent, and what this tells us about the distribution of power in AI governance.