- Convenors:
Laura Kunz (University of Graz)
Jana Heim (Weizenbaum Institute Berlin, WZB Berlin Social Science Center)
Daniel Schneiß (Weizenbaum Institute Berlin, Kiel University)
- Format:
- Traditional Open Panel
Short Abstract
Recognising that AI comes with its own particular future-oriented epistemology, this panel explores how the epistemology of AI is changing work and workers’ agency, and asks how work can be studied in STS and collaboratively ‘futured’ in an AI-driven world.
Description
‘Nothing comes without its world’ (de la Bellacasa, 2012) — neither does AI. While STS has by now acknowledged the need to interrogate the often invisibilised work that maintains digitalisation, datafication and the functionalities of AI (Sambasivan et al., 2021), we have not yet asked how the epistemological framework that AI comes with is changing how labour is approached. Acknowledging that AI is a non-neutral technology impacting the future of work requires us to take this epistemological framework seriously, i.e., to attend to how it promotes one view of the world while narrowing down others. After all, AI systems establish a ‘world model’ (Amoore, 2024), follow productivity and efficiency criteria, and often boil down to single-parameter optimisation and prediction approaches focused on monetarily measured efficacy (Kasy, in press; McQuillan, 2022).
With this panel, we thus want to interrogate how the implementation of AI changes the approach to work practices, work organisation and labour relations when guided by these epistemologies. This also includes examining the work of integrating, infrastructuring, adapting, repairing, and evaluating AI systems.
We welcome contributions, both empirical and conceptual, that look into:
- How is (the epistemology of) AI changing work processes, work organisation, labour relations, and the epistemic agency of workers?
- What kinds of reconfigurations, frictions, and compromises between professional values, organisational logics, and epistemic logics of AI arise?
- What constitutes meaningful futures of working with(out) AI and how can they be collaboratively futured?
We are also interested in the tools and perspectives offered by STS to interrogate work as a sociotechnical phenomenon that co-shapes society and the economy in the future, foregrounding work as a research area in STS.
Accepted papers
Session 1
Paper short abstract
We explore the ways in which working processes of case officers are reshaped by the introduction of AI decision support tools. We lay out several rationales the case officers have to adhere to, and we argue that these competing rationales create a very narrow position for case officers to inhabit.
Paper long abstract
In this paper, we discuss the role of case officers in contexts of AI decision support. In sensitive contexts, e.g., the allocation of welfare resources, fully AI-based decisions are usually prohibited: under the EU’s General Data Protection Regulation, a decision based on sensitive personal data cannot be fully automated, and under the new EU AI Act, there must be meaningful human oversight. This creates challenges for those within institutions who are tasked with making the “final decision”, i.e., the “humans in the loop”.
We explore the ways in which working processes of case officers are reshaped by the introduction of AI decision support tools. We lay out several, often competing, strands of rationales the case officers have to adhere to: there is the efficiency rationale—as data-driven tools are supposed to support case officers with their case load—and its accompanying expectation of an increase in case numbers; there are legal regulations that require meaningful agency in all decisions, as well as the need for individual “AI literacy” in order to even understand how an AI-proposed decision came to be; and there are organizational rules governing how deviating from, or adhering to, an AI-proposed decision has to be justified by the case officer. Drawing on case studies, we argue that these competing rationales create a very narrow and often almost impossible position for case officers to inhabit.
Paper short abstract
This paper examines how generative AI reshapes musical work by reconfiguring musicians’ agency, platform governance, cultural values, and copyright. It shows how AI’s epistemologies reorganise creativity, legitimacy, and the future of work in music.
Paper long abstract
This paper examines how generative AI is reconfiguring the world of music by transforming not only creative practices but also the epistemic conditions under which musical work is organised, valued, and governed. Drawing on an STS framework and an empirically grounded case study of AI in music between 2023 and 2026, the paper approaches music as a sociotechnical field in which musicians, platforms, markets, legal regimes, audiences, and cultural values are increasingly reorganised through the implementation of AI systems.
Rather than treating AI merely as a tool, the paper analyses it as a future-oriented epistemic framework that promotes particular models of music-making, circulation, and ownership. In the music sector, these epistemologies reshape work processes by privileging forms of prompting, selection, monitoring, and adaptation, while redistributing agency across musicians, new platform infrastructures, datasets, interfaces, and AI providers. This transformation affects the evaluative and organisational work required to integrate, repair, regulate, and make sense of AI in musical activities.
The paper focuses on four dimensions: the transformation of musicians’ work and professional agency; the reconfiguration of cultural values such as authenticity, authorship, and creativity; the growing role of digital platforms in governing visibility, legitimacy, and monetisation; and the copyright controversies surrounding training data, synthetic voices, and ownership. Empirically, it combines document analysis and multi-site digital ethnography. Overall, it argues that meaningful futures of musical work with AI are not determined by the technical features of AI but will emerge as outcomes collectively negotiated through frictions, controversies, and sociotechnical reassembling processes.
Paper short abstract
This paper explores how AI tools are incorporated into judicial institutions handling armed conflict claims, and how their integration alters the conditions under which professional judgment is exercised and the forms of action considered available or justified in the administration of justice.
Paper long abstract
Colombian judicial institutions that deal with claims brought by victims of the armed conflict operate within a context of prolonged internal violence and sustained struggles over how justice should be defined and delivered in its aftermath. Over the past decades, these institutions have addressed questions of accountability, recognition, and reparation, responding to struggles related to victims’ rights, the burden of proof in contexts of internal armed conflict, and the institutionalization of transitional justice mechanisms. This presentation explores the integration of AI tools into judicial systems within this setting.
Drawing on ongoing ethnographic research in Colombia and a review of reported AI use in judicial institutions, the presentation examines how AI tools are incorporated into everyday judicial work. These systems prioritise claims, identify patterns in past records, estimate probabilities, and process large volumes of judicial text. They are built through decisions about what data to include, how to define relevant variables, and how to compare information across cases. Legal professionals must determine how to work with these outputs and how to situate them within existing standards of accepting evidence and proving responsibility.
As these tools become embedded in institutional routines, the conditions under which professional judicial judgment is exercised are altered. In institutions responsible for processing claims arising from armed conflict, where determinations of harm and recognition are significant, the presentation considers how the integration of AI redefines what becomes relevant within judicial processes, and how this influences which forms of action are available or justified in the administration of justice.
Paper short abstract
This paper introduces the concept of adequacy regimes to describe the socio-technical arrangements through which actors, engineers in particular, decide when knowledge or system outputs are good enough to proceed with work.
Paper long abstract
Artificial intelligence systems are increasingly integrated into everyday work practices, yet their outputs remain uncertain, strange, or difficult to interpret. As a result, workers must continuously determine when AI-generated results are sufficiently reliable to incorporate into their tasks. This paper introduces the concept of adequacy regimes to describe the socio-technical arrangements through which actors decide when knowledge or system outputs are good enough to proceed with work.
Drawing on ethnographic research with robotics engineers, software developers, and technical practitioners working with AI systems, the paper examines how workers negotiate the epistemic expectations introduced by contemporary AI tools. Rather than simply automating decision-making, AI systems often shift the burden of judgment onto workers, who must determine when model outputs are acceptable, when further verification is required, and when the system should be ignored or overridden. These judgments are rarely individual decisions; instead, they are shaped by organizational constraints, professional norms, engineering practices, and the epistemic logics embedded in AI technologies themselves.
Adequacy regimes emerge at the intersection of these forces, structuring acceptable levels of uncertainty, error, and responsibility in AI-mediated work. Through practices such as iterative testing, prompt experimentation, cross-checking, and collaborative troubleshooting, workers transform uncertain model outputs into actionable knowledge.
By focusing on adequacy regimes, this paper contributes to STS discussions about the epistemology of AI and the future of work. It shows how AI does not simply introduce new tools into workplaces but reshapes how workers evaluate knowledge, distribute responsibility, and define when work is complete.