- Convenors:
- Marc Böhlen (University at Buffalo)
- Andrew Lison (University at Buffalo, SUNY)
- Format:
- Traditional Open Panel
- Location:
- HG-09A24
- Sessions:
- Tuesday 16 July, -, -
Time zone: Europe/Amsterdam
Short Abstract:
This panel seeks to host conversations specifically around the question of contemporary limits to AI. These limits can be imposed or inherent—that is, enforced by human beings by choice or given by the material resources required to produce AI in the first place—or both.
Long Abstract:
Recent discussions of artificial intelligence have largely been driven by a sense of inevitable transformation. Our panel seeks instead to stimulate conversations around the question of contemporary limits to AI.
Limits can be understood in two contradictory senses. The first is the notion that external constraints will need to be imposed on a technology that is developing faster than social mores can adjust. A second is becoming apparent in the energy and water resources required to produce state-of-the-art language models; such environmental constraints gesture toward limits of producibility. AI, then, may either need to be held in check (as in the accelerationist case of an out-of-control artificial general intelligence) or present unacknowledged shortcomings of its own (revealed through critique).
Our panel seeks contributions discussing how these limits might manifest themselves in specific situations, and how they might be addressed through practical countermeasures. We encourage submissions resisting industry narratives that marginalize social concerns in favor of unrealistic and/or unethical visions of unlimited technical performance and blind optimization. Similarly, we welcome presentations addressing conflicts between AI development and natural resource availability. Downstream, ecological restrictions are also global limits, as excessive power and cooling costs must be passed on to users while their externalities risk environmental harm. Moreover, if only large corporations and state actors end up possessing the means to support advanced AI production, given the energy and mineral costs associated with large-scale computation, what could this mean for the future of technology and democracy?
STS is ideally suited to address this topic as its varied approaches can foster connections across the boundaries, both social and material, confronting AI. In doing so, it can inform more pragmatic approaches to the potential transformations promised by AI’s uncritical proponents.
Accepted papers:
Session 1: Tuesday 16 July, 2024
Paper short abstract:
A.I. models have unique predictive powers yet do not acknowledge their limitations. I will discuss the consequences of this condition in NVIDIA’s digital twin of Earth. Based on research in A.I.-enabled remote sensing, I will suggest how model diversity can produce counterweights to large A.I. models.
Paper long abstract:
The debate surrounding the influence of A.I. foundation models has precedent in the earth sciences, where model limitations have long been understood as inherent to the modeling process itself. Climate scientist Reto Knutti, for example, long ago argued for the use of model diversity to bound model uncertainty.
Countering model hegemony can occur in different ways. Small A.I. models that readily share their own limitations can be effective in resource-constrained environments. I will discuss some tradeoffs between compute-intensive, data-hungry neural networks and data-sparse algorithms in mapping contested land-use conditions, and suggest how model diversity might combine the benefits of small models with the power of the largest systems.
Earth system models are limited by uncertainty-hiding large neural networks in unique ways. Early earth system models produced sets of estimates that required interpretation by experts. The latest physics-informed neural networks, however, inherit the opacity of neural networks and add seductive visualization. Climate scientists working with systems such as NVIDIA’s digital twin of Earth, EARTH-2, can share the vast volumes of data, collapsed to a few simple metrics in earlier specialists’ models, with the public in visually compelling ways. The resulting cinematic climate crisis communication shifts the emphasis from plausible scientific inference to a visual experience of climate futures that makes scrutiny more challenging. While EARTH-2’s powerful visualizations will make climate science visually more accessible, they suggest unrealistic, on-demand countermeasures to climate change.
Paper short abstract:
Debates over the impact of AI often focus on the job losses it could precipitate. This paper considers the user practice of image-generation intensification and the engineering concept of model collapse to argue that full automation is unlikely, as labor will continue to meet otherwise-unmet human needs.
Paper long abstract:
The rapid rise of neural-network-based artificial intelligence has led to predictions that various forms of productive human activity will be overtaken by computation. This is especially the case for what autonomist theorists characterize as “immaterial labor,” understood first and foremost as a kind of symbolic manipulation. AI, in its capacity to generate text, images, and sound, is seemingly ideally suited to such tasks. As sociologist Randall Collins has argued, prior to the contemporary resurgence of AI, automation was largely thought of in terms of “blue-collar” work; now, however, it has become a pressing occupational concern for the professional and managerial classes.
This paper considers the potential for automating semio-linguistic work through the lens of labor as a shifting social definition. In this view, labor shares something with the psychoanalytic definition of desire: once fulfilled, the need for it is not sated, but shifted elsewhere. An example of this can be found in the recent trend in which users post the results of their efforts to repeatedly make image generators intensify a conceptual aspect of an image they have been asked to depict (e.g. to make an image of someone at work depict them working harder). The similarity across such images as they approach extremes of intensity speaks to a limit in machine-generated visual vocabularies that reinforces the need for human contribution. Combined with the concept of “model collapse,” in which systems trained on AI-generated content produce increasingly uncreative results, a world without human work, “immaterial” or otherwise, appears unsustainable.
Paper short abstract:
This paper addresses the lack of ‘appropriate’ data to train AI machines in many sectors, and examines the experiences of practitioners who are under pressure to prioritise feeding the machine with more and better data.
Paper long abstract:
Given the growing hype about ‘AI’ futures, more in-depth empirical research is needed to critically examine the limits and barriers to AI’s expansion in actual contexts of practice. One such limit in many sectors is the availability of appropriate data to train AI machines.
Previous research identifies a deepening “desire for numbers” (Kennedy, 2016), or data outputs, including predictions (Mackenzie, 2015), that can contribute to the stabilisation and legitimisation of knowledge claims (Fine, 2007; Daipha, 2015; Heymann et al., 2017). However, there has been minimal attention to the pressure to feed data into the systems that produce these numbers; that is, to the struggle to generate inputs.
A focus on inputs brings our attention to how practitioners are experiencing, and in some cases resisting, workplace pressures to ‘feed the machine’ - a term we use to allude to the data and other resources required to sustain both the AI and capitalist systems within which they are embedded.
We draw on empirical research consisting of interviews, focus groups and observations with 65 UK-based practitioners in the pharmaceutical industry, Higher Education and arts practice. Our analysis shows that AI machines are not always or straightforwardly fed abundant and appropriate data. We argue that, in some contexts of practice, efforts to develop AI in the face of such barriers lead to increased pressures on practitioners to prioritise feeding the machine with more and better data, which has implications for workplace cultures of AI practice.
Paper short abstract:
Many narratives around AI assign it the capacity for agency, knowledge, prediction, and objectivity. However, AI systems adhere to information-mathematical materialities that entail epistemic limits, rendering the shiny narratives porous. We present alternative narratives that incorporate those limits.
Paper long abstract:
Artificial intelligence (AI) is currently widely discussed as a solution for many pressing issues like the climate catastrophe, global inequality, and other complex societal problems. As such, AI is flanked with many promising narratives, such as its supposed capacity for agency and decision making, for knowledge structuring and recombination, for meaningful prediction, for objectivity, and for political neutrality. The possibilities of AI seem unlimited. The majority of positive AI narratives originate from proponents of transhumanism, wealthy individuals, or both, such as Elon Musk and Geoffrey Hinton. When these narratives are approached from a technical perspective, inherent epistemic questions come into focus: the methods behind AI systems adhere to specific information-mathematical materialities that imply certain epistemic characteristics. These characteristics correspond to the question of what can(not) be known through, and thus achieved by, AI, and point to severe limits that are often ignored or even concealed. Attending to the inherent epistemic limits of AI systems renders the abovementioned shiny narratives porous and pierces through them. To illustrate this, we discuss two kinds of AI methods that have been protagonists of excessive AI narratives: data-driven predictions and large language models. We present alternative narratives that incorporate those properties and limits: formal token transformer tools, automated data factories, or human-machine computing networks. This work draws from critical data studies, STS, data protection theory, and critical computer science, and contributes to understanding the limits of AI.
Paper short abstract:
Secondary use of trained AI models is currently one of the most severe regulatory gaps with regard to AI. We propose Purpose Limitation for Models, a concept that limits the use of a trained AI model to the purpose for which it was originally trained and for which the training data was collected.
Paper long abstract:
Imagine medical researchers build an AI model that detects depression from speech data. Let's say the model is trained on the data of volunteer psychiatric patients. This could be a beneficial project to improve medical diagnosis, and that's why many consent to the use of their data. But what if the trained model falls into the hands of the insurance industry, or of a company that builds AI systems to evaluate job interviews? In these cases, the model would facilitate implicit discrimination against an already vulnerable group.
There are currently no effective legal limitations on reusing trained models for other purposes (this includes the forthcoming AI Act). Secondary use of trained models poses an immense societal risk and remains a blind spot in ethical and legal debate.
In our interdisciplinary collaboration between critical AI ethics and legal studies, we develop the concept of Purpose Limitation for Models to impose suitable limits by empowering society to govern the purposes for which trained models (and training datasets) may be used and reused. Purpose limitation is originally a concept from data protection regulation, but it does not automatically apply to trained AI models, as the model data is not generally personal data. We argue that possession of trained models is at the core of an increasing asymmetry of informational power between AI companies and society. Limiting this power asymmetry must be the goal of regulation, and Purpose Limitation for Models would be a key step in that direction.