- Convenors: Anna Schjøtt (University of Amsterdam), Nanna Thylstrup, Louis Ravn (University of Copenhagen)
- Chair: Tobias Blanke (King's College London)
- Discussant: Dieuwertje Luitse (University of Amsterdam)
- Format: Traditional Open Panel
Short Abstract
The growing ubiquity of AI in society is currently being met with the scholarly formation of Critical AI Studies. However, what are the “critical” and the “AI” of Critical AI Studies? This panel invites contributors to explore these questions empirically, theoretically, reflectively and artistically.
Description
The ongoing hype around Artificial Intelligence (AI), often driven by the tech industry, continues to produce enchanted stories about AI's capabilities and its societal relevance (Campolo & Crawford, 2021). Consequently, generative critiques that can both demystify such accounts and challenge current approaches to AI are increasingly needed. Critical AI Studies is an interdisciplinary ‘field in formation’ (Raley & Rhee, 2023) that responds to this need by aiming to better understand, critique and provide alternatives to the current regimes of AI development and implementation – from dataset production to model development, evaluation and deployment, as well as the social, political and institutional contexts that shape them.

Recent years have seen a variety of approaches to critically examining AI, such as ethnographic ‘lab studies’, reverse engineering or red teaming, technography, controversy mapping, artistic research, critical readings of computer science papers, and historical genealogies of particular techniques used in AI. Yet, as an emerging field, Critical AI Studies has already been criticised for failing to question the ‘thingness’ of AI (Suchman, 2023), and there have been calls for more ‘critical’ methods (Offert & Dhaliwal, 2025). So, how do we ensure that we do not reproduce uncontroversial accounts of AI (Suchman, 2023) and that our critique does not run out of theoretical-methodological steam (Latour, 2004)?

In this panel, we wish to pursue these questions and invite contributors to submit critical empirical studies of AI grounded in both established and more experimental or innovative methodologies, conceptual frameworks, and artistic practices, along with reflection pieces that dissect what we understand by ‘criticality’ and ‘AI’ in these efforts. STS scholarship is particularly well placed to participate in such critical endeavours, not only through in-depth empirical investigations of AI as a scientific field and situated practice, but also by activating these insights through acts of intervention (Downey & Zuiderent-Jerak, 2021).
Accepted papers
Session 1
Paper short abstract
The paper analyses the efforts of the Coalition for Content Provenance and Authenticity (an initiative for provenance and authenticity standards) to identify a distinct mode of infrastructural governance in the context of generative AI, enabled by big tech’s “convening power”.
Paper long abstract
Provenance is commonly approached as a technical matter of traceability, documentation, or metadata (Borgman; Kale et al., 2023), through which origin, authorship, and transformation can be rendered stable and intelligible. Contemporary machine learning systems complicate this assumption. Outputs emerge through layered processes of training, optimization, and probabilistic generation whose relations to prior materials cannot be fully captured by established distinctions between copying, derivation, or independent creation (Amoore, 2020; Guzman & Lewis, 2024; Usher, 2025). Rather than treating provenance as something “to be found”, or its stabilization in AI environments as a technical puzzle “to be solved”, this paper examines provenance as a site of sociotechnical construction and contestation. Paying empirical attention to the efforts of the Coalition for Content Provenance and Authenticity (C2PA), a global initiative for provenance and authenticity standards, we identify a distinct mode of infrastructural governance enabled by big tech’s “convening power” (Van Der Vlist, Helmond, & Ferrari, 2024). Specifically, we demonstrate how AI actors organize provenance, who assembles to stabilize it, and what kinds of power operate through this assembly. In doing so, we show how provenance and authenticity standards are leveraged both to let big tech demonstrate commitments to “responsible AI” that preempt regulatory intervention, and to build an infrastructure through which its preexisting dominance is reinforced. Hence, the paper addresses concerns at the core of critical AI scholarship: How is power infrastructurally embedded and exercised in AI contexts? And, more specifically, how do sociotechnological, economic, and legal arrangements reinforce power asymmetries?
Paper short abstract
This paper provides empirical findings from a study on autonomous drone delivery in Australia. It uses the frame of the "testbed" to understand how and why Australia is used as a testing ground for AI systems, connecting these trials with the history of science and technology testing in the colony.
Paper long abstract
Keywords: AI nationalism, testbeds, Australia, drone delivery, race
This paper examines the phenomenon of the AI testbed and practices of testing-in-the-wild. It combines historical and sociological approaches to understand how the settler-colony of Australia has come to be treated as an ideal test site, using the commercial drone delivery company Wing Aviation as a case study. It connects the figuration of Australia as a contemporary testbed with histories of the nation as a colonial experiment. I argue that this historical frame has been consistently deployed to justify the treatment of lands and peoples as experimental subjects across a range of domains, from medical science and penal management to military operations. In doing so, I show how Australia has been treated as a test site and Australians as test subjects based on changing imaginaries of the nation and its people: from proxies for whiteness and Empire in the colonial period, to multiculturalism and ethnic diversity in the contemporary era.
Paper short abstract
This paper examines how medical AI models are developed across publications, competitions, and commercial products in China’s AI industry. Based on ethnography in two startups and interviews with AI engineers, it shows how different institutional settings produce “the model multiple.”
Paper long abstract
This paper examines how machine learning (ML) models for medical image analysis are developed across different institutional settings in China’s rapidly growing medical AI industry. Drawing on ethnographic fieldwork at two Chinese medical AI startups, interviews with ML engineers and researchers, and analysis of published research papers, the paper situates everyday coding work within the infrastructures and organizational logics shaping contemporary AI development. I compare three settings in which ML models are developed and evaluated: academic publications, competitions, and commercial products. Each mobilizes distinct datasets, evaluation metrics, and development priorities. Academic publications rely on curated datasets and mainstream benchmarks to produce novel and publishable results. Competitions reward performance on highly controlled evaluation tasks and serve as arenas for demonstrating technical prowess. Commercial products, in contrast, require engineers to work with heterogeneous clinical data, integrate models into existing medical infrastructures, balance performance with hardware constraints, and navigate regulatory and market pressures. These differences give rise to what I call the model multiple: the same model architecture is enacted differently depending on whether it is built for papers, prizes, or products. While companies prominently showcase success in publications and competitions to signal innovation, everyday programming work is largely oriented toward product development. By tracing how models move across these sites, the paper shows how AI models are shaped by the infrastructures, institutional expectations, and data practices that sustain them.
Paper short abstract
Drawing from empirical research on changing AI governance in the US in 2025, this presentation proposes a modality of critical AI studies that attends symmetrically to the coproduction of AI and democracy, by treating both technoscience and democracy as contingent and constructed sites of inquiry.
Paper long abstract
The return of Trump to the presidency in January 2025 brought a pronounced shift in U.S. AI policy, discourse, and corporate strategy. Executive orders dismantled Biden-era AI safety frameworks, mass firings hollowed out public expertise and institutions, and the administration reframed its posture — in JD Vance's terms — from "AI safety" to "AI opportunity." These moves were presented not as deregulation but as liberation: the triumph of democracy over "woke" elite ideology. The concomitant intensification of entanglements between the tech sector and the US government has been met with alarm from civil society and STS scholars, with warnings of technofascism and tech oligarchy.
This paper takes this political moment (empirically grounded in interviews, executive orders, public discourse, and legal texts) as an entry point for reflecting on what "critical" means in Critical AI Studies. Despite the shocking nature of the sharp turn in U.S. science policy, the essential groundwork for the emergent oligarchic relationships and abusive applications of AI was laid long before 2025. Attending to continuity and disjuncture in the construction of both democracy and technological development is key for understanding how science and technology policy is changing in the U.S. and globally, and how AI development and democratic politics are coproduced.
A Critical AI Studies agenda might accordingly attend symmetrically to the coproduction of AI with democracy and include interrogations of how more interventionist STS approaches can help to propagate novel and robust democratic mechanisms that extend beyond the “lab” (or the startup).
Paper short abstract
By treating technical architecture itself as the primary site of critique, this paper challenges the "technological neutrality" assumption in autonomous driving, demonstrating how capability boundaries structurally encode inequality across perception, decision-making, and systems layers.
Paper long abstract
Critiques of AI often target deployment practices, dataset biases, or governance failures, leaving the technical architecture itself unquestioned. This paper argues that such approaches risk reproducing "uncontroversial accounts of AI" (Suchman, 2023) by accepting the architecture as a given and debating only its uses. We propose instead to treat technical architecture as the primary site of critical inquiry, following Winner's insight that artifacts have politics.
Taking autonomous driving as a case study and combining close reading of engineering literature with STS-informed analysis, we examine how the "technological neutrality" assumption fails at three architectural layers. At the perception layer, dynamic range ceilings and feature extraction bottlenecks produce irreducible detection disparities disproportionately affecting darker-skinned pedestrians, revealing that capability boundaries are simultaneously equity boundaries. At the decision-making layer, end-to-end neural architectures dissolve accountability chains, whereby explainability techniques generate post-hoc rationalizations rather than causal transparency, institutionalizing historical bias while rendering correction structurally impossible. At the systems layer, the shift from HD maps to world models replaces visible geographic exclusion with covert temporal exclusion, redistributing risk along socioeconomic fault lines the dominant SAE taxonomy renders invisible.
This paper operates from within the technical rather than imposing ethical frameworks from outside. By engaging directly with engineering literature and system design logic, we show that architectural non-neutrality is an empirical finding, not a philosophical conjecture. Current autonomous driving systems have never incorporated equity as a foundational design constraint, and meaningful critique must engage with architecture as such rather than limiting itself to configuration or deployment.
Paper short abstract
This paper examines Hugging Face's evolution to show that, as AI development becomes increasingly industrialised, infrastructural intermediaries play an increasingly crucial role in shaping the governance, economics, and politics of AI.
Paper long abstract
This paper examines Hugging Face’s (HF) evolution from a machine-learning library into an infrastructural intermediary within the contemporary AI ecosystem. Originally launched as a chatbot application and natural language processing library, it now operates as a "hub": a repository, infrastructure provider, standard-setter, marketplace operator, and participant in AI governance and ethics. Rather than presenting this trajectory as a linear story of growth, the paper interrogates the political-economic forces that have shaped HF’s development and the tensions that emerge from its dual positioning as both an advocate of “open and ethical AI” and a commercially embedded platform.
Employing a mixed-methods approach rooted in critical data and AI studies, the research analyses a longitudinal corpus of HF’s archival webpages, terms of service and pricing (2016–2026), alongside a systematic review of its organisational partnerships and infrastructural integrations with cloud hyperscalers and hardware manufacturers. This approach enables a decade-long analysis of the platform’s shifting roles and ecosystem position.
We argue that in this trajectory, the model functions as a central organising unit through which AI is built, circulated, and governed, and that this centrality is inseparable from an ongoing assetization (Birch & Muniesa, 2020) of machine learning models. The case of Hugging Face thus also shows how, as AI development becomes increasingly industrialised, infrastructural intermediaries play an increasingly crucial role in shaping the governance, economics, and politics of AI.
Paper short abstract
By advancing an infrastructural approach to Critical AI Studies, we seek to unravel and challenge the ‘thingness’ of AI. Offering a critical empirical intervention, we analyse the infrastructural dependencies and market structures shaping public AI projects in Denmark.
Paper long abstract
Responding to calls within Critical AI Studies to move beyond the ‘thingness’ of AI, this paper develops an infrastructural and empirical approach to public-sector AI. Rather than treating AI systems as discrete technological artefacts, we examine the layered infrastructural dependencies and market structures through which they are assembled, maintained, and governed. Public AI projects across the EU are increasingly promoted as strengthening national IT sectors and envisioned as ways to uphold digital sovereignty (Coletti et al., 2025). Yet such claims remain undertheorized and empirically underexplored.
Drawing on a dataset of 229 AI projects rolled out across the public sector in Denmark, we develop a methodological framework for identifying key market actors supplying public AI infrastructures. First, we analyse procurement documents to map official suppliers, revealing a complex landscape of domestic IT companies and public–private partnerships (Laage-Thomsen et al., 2025). We then zoom in on the technological design of these AI systems (their cloud infrastructures, model architectures, software libraries, APIs, and more) to show how global platform corporations persist as de facto infrastructural sub-contractors for the public sector (Luitse, 2024).
This approach allows us to cut across local “small tech” suppliers assembling and maintaining particular services, and global “big tech” corporations supplying the underlying infrastructural building blocks, foregrounding embedded power asymmetries. By situating local AI systems within their wider global ecosystem of infrastructural dependencies, the paper offers a critical empirical intervention to ongoing debates about digital sovereignty and a methodological pathway for studying AI as a socio-technical and political assemblage.
Paper short abstract
This paper focuses on what ought to be the most transparent object in the OpenAI actor-network: the documentation describing its models. With evidence from an empirical study of GPT models, I find the documentation to be inaccurate, rhetorically opaque, and otherwise part of the 'AI black box'.
Paper long abstract
Debates about AI’s capabilities have persisted since at least the 1960s (Dreyfus, 1967, 1992; Searle, 1980). Today, these discussions sometimes iterate as abstract critiques that have themselves been criticised as ‘toothless’ (Munn, 2023). This paper takes up the call to ground critiques of AI developers and their products in empirical evidence.
OpenAI, perhaps the most well-known AI developer, has a name that insinuates transparency. Yet its systems operate largely as—and produce—black boxes (Bunge, 1963). This paper focuses on what ought to be the most transparent object in the OpenAI actor-network: the documentation describing its models, how they were built, and how they are meant to work.
Through an empirical study working with GPT models and systematically adjusting prompts, temperature, and other parameters, I test the veracity of claims made in OpenAI’s documentation. I find the documentation to be sparse, inaccurate, rhetorically opaque, and otherwise consistent with the oversell-overpromise characteristics of the company’s other productions, such as its advertisements. Furthermore, I find the production of the documentation itself is opaque; for instance, details about who wrote it (and under what labour conditions) remain largely invisible.
In this sense, documentation is not simply a technical guide but a rhetorical artefact that stabilises particular understandings of what AI is and how it works. Treating documentation as a discursive formation (Foucault, 1969), the paper foregrounds the heterogeneous networks of labour, infrastructure and user interaction that sustain contemporary AI systems.
Paper short abstract
This contribution discusses the development of participatory methodologies inspired by procedures of "red teaming" that aim to identify, examine, and rethink how “the environment” and its multiple crises are configured within generative AI (GenAI) systems.
Paper long abstract
This contribution discusses ongoing research aimed at identifying, examining, and rethinking how “the environment” is configured within generative AI (GenAI) systems. The proliferation of commercial GenAI applications and the hyperscaling of data centers have brought the environmental harms of these technologies into sharper focus, most notably their unsustainable resource use and energy demands, as well as problems with greenwashing. Yet the direct environmental impacts of this material infrastructure are not the only way in which the environment and GenAI interrelate. Although indirect environmental impacts and underlying values and assumptions are harder to identify, they remain highly influential and should be made visible. Drawing inspiration from the method of ‘red teaming’ AI, we suggest ‘green teaming’ as a distinct approach that provides an initial step towards mapping the diverse ways in which ‘the environment’ is constituted in GenAI, including how it is overlooked.
Red teaming is typically conducted in technology companies to identify unintended, unsafe, and harmful outcomes of AI models. Recently, civil society and public sector organisations have begun to adopt red teaming in 'the public interest' or for 'social good'. In the context of GenAI, this often means creating prompts to evaluate whether outputs (images, text, or other modalities) are suitable for a pre-defined use case, adhere to social norms, or (unintentionally) reinforce harmful stereotypes. Usually, this involves collaborative exercises directed by different forms of expertise. This contribution outlines "green teaming" as a participatory methodology and presents first insights from its application.
Paper short abstract
This paper argues that archival theory offers critical AI studies a generative theoretical resource. Drawing on our experience editing a special issue on AI and Archives, we argue that archival theories provide meaningful frameworks for exploring how AI reconfigures the politics of knowledge infrastructures.
Paper long abstract
Recent debates in critical AI research have foregrounded questions of data governance, documentation, classification, and accountability that archival scholars will immediately recognize. Yet despite this convergence, archival theory remains underutilized as a conceptual resource in critical AI studies. This paper argues that archival theory offers not merely a resonant vocabulary but a distinct analytical tradition shaped by the institutional, ethical, and political dimensions of many of the problems now foregrounded in AI governance debates. Drawing on our experience editing a special issue on AI and Archives, we trace what we describe as an emerging archival turn in critical AI research. Archival concepts including provenance, appraisal, custody, and unlearning have re-emerged as central sites of ethical and political concern in AI, while critiques of benchmarks and taxonomies echo archival scholarship on how classificatory infrastructures sediment racialized, gendered, and colonial power. We argue that this convergence is symptomatic of shared concerns about who controls the cultural record in the age of AI, on what authority, and to what ends. Rather than treating archives merely as data sources for AI systems, we show how archival theory, in particular its critical and reparative traditions, provides frameworks for interrogating how AI reconfigures memory, authority, and the politics of knowledge production. In doing so, we make a case for archival theory as a generative resource for answering the question this panel poses: what does it mean to be critical about AI?
Paper short abstract
Based on ethnographic fieldwork, we develop “decentering AI” as a methodological strategy by highlighting the relations and worlds enacted through the production of AI, and those that produce AI. Decentering is a critical pathway to foreground the social, ecological and more-than-now in studying AI.
Paper long abstract
Contemporary studies of AI run up against its elusive character despite significant industry promises and optimism. The ontological boundaries of ‘AI proper’ are fuzzy, and the production and maintenance of these boundaries are protected by corporate interests. Consequently, critical AI research that sets out to untangle these fuzzy boundaries may unwittingly contribute to the performance of AI as a coherent object and solution to myriad social and ecological problems (Suchman, 2023).
We suggest a research strategy of ‘decentering AI’ as an approach for Critical AI Studies. We build on work regarding methodological decentering to foreground causes of systemic discrimination instead of tweaking parameters (Gangadharan & Niklas, 2019); to attune to non-human relations and technological ‘un-making’ (Nicenboim et al., 2024); and to ‘study around’ multiple enacted objects (M’charek, 2000). We take seriously an ethnographic fieldwork observation in the Feminist Generative AI Lab: AI as an object of study easily disappears from view. For instance, when studying data work behind AI models, pre-existing systems of labour exploitation take centre stage. Similarly, in research on the pollution of AI, decentering AI means recentering the toxic chemical relations between industries, ecologies and more-than-human lives.
Thus, decentering strategies do not ‘unmask’ AI but develop it as an object of methodological care. This contribution highlights how relations and worlds enacted through the production of AI come into view, alongside those that produce AI. This is one example of the critical in critical AI studies: decentering AI and foregrounding the social, ecological, and more-than-now.
Paper short abstract
Following Serres (1980), we conceptualize the relationship between GenAI and the Q&A platform Stack Exchange (SE) as parasitic, with GenAI acting as an extractor and disruptor and SE serving as the host. Our lens focuses on the data dump, a core sociotechnical artifact that serves as the extractive resource.
Paper long abstract
Following Serres (1980), we conceptualize the relationship between GenAI and the Q&A platform Stack Exchange (SE) as parasitic, with GenAI acting both as an extractor and a disruptor and SE serving as the host. Founded in 2009, SE operated without major disruptions until the launch of modern GenAI tools, after which major perturbations emerged.
Epistemologically, we enter the analysis through the data dump, a core sociotechnical artifact archiving all the contents of the SE platform. We take a computational-qualitative approach to trace the data dump through discussions between SE community members, moderators, and company representatives, held on dedicated meta forums and themselves available as part of the data dump. Moreover, we draw from the developer platforms GitHub and Hugging Face to identify the affordances and qualities that developers perceive in the data dump.
Our analysis identifies and conceptualizes three movements of the parasite framework in the context of platform capitalism in the GenAI era. The first is extraction (or grabbing, or capture): in our case, the use of the data dump without visible reciprocity for the host (no return contributions, institutionalized acknowledgement, or value-sharing arrangements). The second is the production of noise (or interference), which is central in Serres' thinking: the emergence of controversies on SE, governance frictions, and a loss of trust, including credibility disputes, suspicion, and a tightening of policies. The third is the reconfiguration of flows: the displacement of attention and value toward AI systems and external infrastructures, observable through declines in visits and contributions and shifts in usage trajectories.
Paper short abstract
Drawing on ethnographic fieldwork at the Center for Artificial Intelligence in São Paulo, I propose a methodological intervention that takes indigenous epistemologies seriously. I explore how approaching AI through relational cosmologies can reconfigure what critical engagement with AI means.
Paper long abstract
In my contribution, I want to draw on ethnographic fieldwork conducted at the Center for Artificial Intelligence (C4AI) in São Paulo to propose a methodological and theoretical intervention within Critical AI Studies. While much critical work on AI focuses on demystifying technical systems or exposing regimes of extraction and governance, I argue that criticality must also take seriously epistemologies from the Souths as sources of conceptual reorientation. Based on interviews and observations among AI researchers in Brazil, I explore how AI is encountered not as a stable technological object, but as something that must be situated within broader ontologies of life, responsibility, and futurity.
Building on Brazilian indigenous thought, particularly the work of Ailton Krenak, I want to experiment with approaching AI through relational cosmologies that do not sharply separate human, non-human, and technological actors. Such perspectives resonate with posthuman and new materialist paradigms yet emerge from distinct historical and ethical trajectories shaped by colonial violence and environmental devastation. I aim to explore what happens when AI is conceptually inserted into indigenous frameworks of relationality and care. This move could simultaneously destabilize dominant imaginaries of AI and reconfigure what it means to engage critically with it. The contribution thus advances an ethnographically grounded attempt to rethink AI Studies from Brazil by testing how indigenous epistemologies can transform the very terms of critique.