Accepted Paper
Paper short abstract
This paper focuses on what ought to be the most transparent object in the OpenAI actor-network: the documentation describing its models. With evidence from an empirical study of GPT models, I find the documentation to be inaccurate, rhetorically opaque, and otherwise part of the 'AI black box'.
Paper long abstract
Debates about AI’s capabilities have persisted since at least the 1960s (Dreyfus, 1967, 1992; Searle, 1980). Today, these discussions sometimes resurface as abstract critiques that have themselves been criticised as ‘toothless’ (Munn, 2023). This paper takes up the call to ground critiques of AI developers and their products in empirical evidence.
OpenAI, perhaps the most well-known AI developer, has a name that insinuates transparency. Yet its systems operate largely as—and produce—black boxes (Bunge, 1963). This paper focuses on what ought to be the most transparent object in the OpenAI actor-network: the documentation describing its models, how they were built, and how they are meant to work.
Through an empirical study working with GPT models and systematically adjusting prompts, temperature, and other parameters, I test the veracity of claims made in OpenAI’s documentation. I find the documentation to be sparse, inaccurate, rhetorically opaque, and otherwise consistent with the overselling and overpromising characteristic of the company’s other productions, such as its advertisements. Furthermore, I find that the production of the documentation is itself opaque; for instance, details about who wrote it (and under what labour conditions) remain largely invisible.
In this sense, documentation is not simply a technical guide but a rhetorical artefact that stabilises particular understandings of what AI is and how it works. Treating documentation as a discursive formation (Foucault, 1969), the paper foregrounds the heterogeneous networks of labour, infrastructure, and user interaction that sustain contemporary AI systems.
A field in formation: What do we mean by ‘critical’ and ‘AI’ in Critical AI Studies?
Session 2