- Convenors:
- Mathieu Jacomy (Aalborg University)
- Ida Schrøder (Aarhus University)
- Alf Rehn (SDU University of Southern Denmark)
- Torben Elgaard Jensen (Aalborg University Copenhagen)
- Format:
- Traditional Open Panel
- Location:
- Theater 7, NU building
- Sessions:
- Thursday 18 July, -, -
Time zone: Europe/Amsterdam
Short Abstract:
This panel invites submissions that inquire into the effects of AI experiments in democratic societies, as well as “making and doing” experiments with AI in STS. The key aim is to investigate the potential of employing AI for good in an STS context, while remaining connected to an experimentalist ethos.
Long Abstract:
Much “algorithmic drama” has surrounded the release of AI tools in the wild. Some fear that AI will undermine or drastically challenge ordinary life in democratic societies, with cautionary examples including automated social security allocation, predictive policing, and biased tax fraud detection. On the other hand, the positive potential of AI has also been noted, with examples including hate speech detection, better spam filters, and auto-generated accessibility tools all considered “good” uses of AI. Beyond both the fear and the hype, AI systems are developed, deployed, and repaired in everyday practices: like other infrastructures, AI is at heart "boring", relying on ordinary acts of tinkering and maintenance.
As AI has arrived in the everyday life of democratic societies, this is a timely moment to consider AI systems as experiments in democratic practice, looking at how we re-negotiate citizens' relationship with the state, accessibility to public debates, conviviality, and everyday practices.
However, STS has proven valuable not only as a commentator on such developments, but in actively “making and doing”: critiquing not from a distance, but while “getting our hands dirty”. Therefore, this panel also invites contributions that showcase practices and experiments of how STS can contribute to developing AI for social good.
Submissions may respond to the following questions:
– How can we understand AI systems/tools as experiments "in the wild" of democratic society?
– How can STS “making and doing” be understood in the era of AI, and how can it contribute to “good” AI?
– Which actors must be involved in the experiments and development of AI "for the social good"?
– What are the (experimental) interrelationships between the algorithmic logic of AI and the logic(s) of social good?
– How can we understand the experimentalist ethos of work with AI?
Accepted papers:
Session 1 Thursday 18 July, 2024, -
Paper short abstract:
The paper examines four cases of AI experimentation that take place beyond the control and orchestration of tech companies. It argues that ‘experimentation in the wild’ by a broad variety of actors should merit our attention and engagement as a form of democratization of innovation.
Paper long abstract:
Following a recent commentary by Lucy Suchman (2023), we suggest that much of the current discussion about AI has been obfuscated by the assumption that AI is a ‘thing’ that is already there. Working against this ‘misplaced concreteness’, the paper contributes to the demystification of AI by examining four cases of actors, beyond the tech companies, who are engaged in the making and shaping of AI through experimentation with the construction, legitimate organization, cultural meaning, and practical use of computational techniques and technologies.
We introduce the term ‘experimentation in the wild’ to denote the broad variety of experimentation that unfolds beyond, alongside, or in conflict with tech companies’ strategic attempts to draw users into circumscribed roles as experimental subjects or beta-testers for the companies’ platforms and tools.
We argue that experimentation in the wild is key to understanding what AI is in practice and how it unfolds. We also argue that experimentation in the wild is a desirable feature of AI development, since it multiplies the perspectives on AI and the stakes in it. Experimentation in the wild is thus a form of democratization, in the Deweyan sense of democracy as an ongoing collective process of inquiry and contestation (Dewey 1927).
In the final part of the paper, we propose a list of ideals for democratically legitimate experimentation with AI. We stress the importance of addressing real-world tensions and controversies, ensuring stakeholder participation and enfranchisement, and fostering pluralism in the scope of experimentation.
Paper short abstract:
This study compares AI controversies across four countries. The analysis identifies "friction objects", i.e. sites of demonstrable harm, trouble or contestation, and two logics of AI controversialisation: one linked to the contextual deployment of AI in specific social environments, the other to its society-wide circulation.
Paper long abstract:
This paper reports on a comparative study of AI & Society controversies across four countries: the United Kingdom, Canada, France and Germany, all Northern countries whose relationship to AI could be summed up as "not the US or China". Adopting a standpoint approach to controversy analysis, we conducted an online consultation in the four countries with experts in AI & Society, with "experts" defined as "all those with experience of the issue and committed to genuine debate" (Ravetz & Funtowicz, 1997). The consultation responses suggest that many recent AI & society controversies in the selected countries do not fit the classic sociological definition of controversy, in terms of the public staging of expert disagreement. They instead identify "AI frictions": in situ incidents in which an AI-based system, company, or deployment becomes the object of demonstrable harm, trouble or contestation (see Ananny, 2022; Ricci et al., 2022). Another prominent set of responses identified a single, general-purpose technology: Large Language Models like ChatGPT. We discuss our analysis of the results, in which we sought to determine degrees of controversiality for different controversies by mapping topic-friction couplings (Costas et al., 2023). This led us to identify two different "logics" for the controversialisation of AI: one focused on the contextual deployment of AI-based systems in specific environments in society, the other on the society-wide circulation of general-purpose AI. To conclude, we reflect on the extent to which recent AI & society debates reproduce technology-centric definitions of AI.
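The paper does not detail how topic-friction couplings were computed; purely as a hedged illustration, such a mapping could be sketched as a topic-by-friction co-occurrence matrix over coded consultation responses, as below. The response data, tags, and scoring rule are invented for illustration and are not the study's actual coding scheme.

```python
# Hypothetical sketch: scoring "degrees of controversiality" by mapping
# topic-friction couplings in coded consultation responses.
# Tags and data are invented for illustration only.
import pandas as pd

# Each row: one consultation response, coded with a topic and a friction type.
responses = pd.DataFrame([
    {"topic": "facial recognition", "friction": "contestation"},
    {"topic": "facial recognition", "friction": "demonstrable harm"},
    {"topic": "chatbots (LLMs)",    "friction": "trouble"},
    {"topic": "welfare scoring",    "friction": "demonstrable harm"},
    {"topic": "welfare scoring",    "friction": "contestation"},
    {"topic": "chatbots (LLMs)",    "friction": "contestation"},
])

# Topic x friction co-occurrence matrix.
couplings = pd.crosstab(responses["topic"], responses["friction"])

# A crude controversiality score: how often a topic is mentioned,
# weighted by how many distinct friction types attach to it.
score = couplings.sum(axis=1) * (couplings > 0).sum(axis=1)
print(couplings)
print(score.sort_values(ascending=False))
```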
Paper short abstract:
The paper presents an LLM-based tool for assessing Open Source Investigations (OSI), developed in collaboration with OSI practitioners. The project sparks critical discussion on the role of LLMs in evaluating information and establishing trust, and is informed by STS scholarship on truth production.
Paper long abstract:
Open Source Investigations (OSI) leverage the abundance of digital data for investigative and conflict reporting, which is especially crucial in the “post-truth” era wherein experts and non-experts alike produce narratives of all sorts about “what actually happened”. OSI engage with this complexity and reclaim the internet for evidence production, emphasizing information sources, providing transparent methodologies, and inviting the participation of digital publics.
Given the known proficiency of Large Language Models (LLMs) in classifying and analyzing vast datasets, our work explores the intersection of LLMs with OSI. Together with OSI investigators, we developed an AI tool that leverages LLMs for the assessment of OSI. The project first involved tracing different practices, strategies and communication styles, which resulted in a database for assessing claims based on OSI. This database was subsequently built into a GPT-style tool.
In the paper, drawing on the STS insight that facts are not merely discovered but constructed through associative processes, we reflect upon how OSI communities evaluate matters such as broken links and the curation of images, and we discuss how far we may rely on machine learning for such assessments. Our project, besides being of practical use, serves as an occasion to discuss the role of LLMs in the context of information and trust against the backdrop of STS scholarship on truth production.
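The abstract does not describe the tool's implementation; as a hedged sketch only, a rubric-driven LLM assessment of an OSI claim might look roughly like the following, here using the OpenAI Python client. The criteria, prompt wording, and model name are placeholders, not the tool built with the OSI practitioners.

```python
# Hypothetical sketch of a rubric-driven LLM assessment of an OSI claim.
# Criteria, prompt wording, and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Assessment criteria, as if drawn from a database of OSI practices.
criteria = [
    "Are the original sources linked and still reachable (no broken links)?",
    "Is the method for geolocating or dating images made transparent?",
    "Are alternative explanations considered and addressed?",
]

def assess_claim(claim: str) -> str:
    """Ask the model to rate a claim against each criterion in the rubric."""
    rubric = "\n".join(f"- {c}" for c in criteria)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You assess open source investigation claims "
                        "against the rubric provided, one criterion at a time."},
            {"role": "user",
             "content": f"Claim:\n{claim}\n\nRubric:\n{rubric}"},
        ],
    )
    return response.choices[0].message.content

print(assess_claim("Satellite imagery shows the bridge was destroyed on 12 March."))
```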
Paper short abstract:
We present the experiment of creating an atlas of algorithms to facilitate a more democratic and less hyped engagement with AI. We perform AI as mundane in two ways: by exposing the extent to which it gets blackboxed in the context of science; and by using it as a pragmatic research companion.
Paper long abstract:
In her recent commentary questioning “the uncontroversial thingness of AI”, Lucy Suchman (2023) argues for the need to ask more mundane questions about specific algorithms in concrete situations. What are they doing? Should they be doing it? Could it be otherwise? Reifying AI as a technology only contributes to the hype and prevents a better democratic engagement with the myriad of issues that arise in diverse socio-technical circumstances.
In this paper we reflect on how to make machine learning algorithms boring again, in the sense of counteracting hype narratives, be they celebratory or doom-and-gloomy. To do so, we build a map of what algorithms are doing in the scientific literature, complete with qualitative annotations exposing their purpose and agency across a wide range of situations.
Carrying out this annotation project on such an extensive corpus entailed collaborating with a large language model to summarize sets of highly specialized scientific abstracts in a manner that is intelligible to a non-expert audience.
We thus perform AI as doubly mundane: as a technology made invisible by its own success in the context of scientific publications, as displayed by the atlas; and as a pragmatic means to translate specialized documents into relatable annotations, a coding task that a human agent could carry out better but not at such a large scale (thousands of summaries).
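The abstract does not spell out the atlas pipeline; a rough sketch of the kind of workflow it implies (group abstracts, then draft a plain-language annotation per group) is given below. The mini-corpus, clustering choices, and the summarize_for_lay_reader helper are hypothetical stand-ins, not the atlas's actual method.

```python
# Hypothetical sketch: group scientific abstracts and draft lay annotations.
# Corpus, parameters, and the LLM helper are illustrative placeholders only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "We apply a convolutional neural network to classify crop disease images...",
    "A random forest predicts hospital readmission risk from electronic records...",
    "Graph neural networks are used to screen candidate battery materials...",
    # ...thousands more in a real corpus
]

# Cluster abstracts by vocabulary, as a crude stand-in for the atlas layout.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

def summarize_for_lay_reader(texts):
    """Placeholder for an LLM call that drafts a non-expert annotation
    describing what the algorithms in these abstracts are doing."""
    raise NotImplementedError("swap in your preferred LLM client here")

clusters = {}
for label, text in zip(labels, abstracts):
    clusters.setdefault(label, []).append(text)

for label, texts in clusters.items():
    print(label, len(texts), "abstracts")
    # annotation = summarize_for_lay_reader(texts)
```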
Paper short abstract:
This paper contributes to the ongoing discussion of how solvable AI ethics interrelates with situated social work ethics, as a fair algorithmic model is employed to support the voluntary counselling of vulnerable children in a Scandinavian NGO.
Paper long abstract:
Can we design a fair AI model that is precise enough to identify and draw distinctions between the causes of social problems experienced by individuals? This is the question a Scandinavian NGO set out to answer as it started collaborating with data scientists from a tech firm to develop an algorithm to assist its voluntary counsellors in online communications with vulnerable children. However, what seemed to be fair in the hands of the developers turned out to (sometimes) produce unfair outcomes in the hands of the voluntary counsellors. With this paper, we contribute to the ongoing discussion of how practices develop as they are confronted with computational problem-solving (Lin & Jackson, 2023; Ruckenstein, 2023). Rather than judging what was wrong about the “fair algorithm”, we take it as an opportunity to investigate what happens when data ethics and social work ethics interrelate in practices of employing AI tools for the good of society. Drawing on ethnographic fieldwork, we trace the ethical frictions produced by the algorithm as it is translated (Latour, 2005) from being an ethically fair model that solves problematic issues with biased counselling and timeliness into being an ethically unfair model that hides away the importance of situated matters such as religion and the slowness of conversations. Somewhat to our surprise, the ethical frictions became constructive sites for advancing a new vocabulary for a relational ethics, through which the goodness of the fair model was continuously questioned and improved.
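To give a generic sense of how "fairness" is often operationalised by developers (this is not the NGO's actual model, data, or metric), one might check that a classifier's error rates are balanced across groups, as in the minimal sketch below; all data and group labels are invented.

```python
# Generic illustration of a group-fairness check on a binary classifier.
# Data, groups, and the parity criterion are invented for illustration;
# this is not the NGO's model or evaluation described in the paper.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # counsellor-confirmed need for follow-up
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # model prediction
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # e.g. an age band

def false_negative_rate(y_true, y_pred, mask):
    """Share of true positives within the group that the model misses."""
    positives = (y_true == 1) & mask
    return ((y_pred == 0) & positives).sum() / max(positives.sum(), 1)

for g in np.unique(group):
    fnr = false_negative_rate(y_true, y_pred, group == g)
    print(f"group {g}: false negative rate = {fnr:.2f}")
# A "fair" model in this narrow sense keeps these rates close across groups.
```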
Paper short abstract:
AI-composed music is validated by its likeness to existing musical repertoires, and is thus inherently conservatively biased. Avant-garde music explicitly aims to overthrow existing repertoires and seeks recognition of a different kind. How do these validations compare, and what symbiosis is possible?
Paper long abstract:
The accomplishments of AI in the field of music composition (AIM) have so far been underwhelming. For example, it has produced only mediocre chorales in the style of J.S. Bach, which is considered a very basic compositional task. AIM is mostly validated by its success at mimicking existing repertoires, evaluated either by human listeners ('Turing test') or by data-analytical methods, with Bach chorales being a prominent test case. The practice of AIM thus carries a bias towards the conservative and towards 'easy listening' music: music is good if enough people accept it as beautiful music.
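One common form of such data-analytical validation, sketched generically below with invented note sequences (this is not any specific system's evaluation), compares the statistical profile of generated music against a reference repertoire, for instance pitch-interval distributions:

```python
# Generic sketch of "likeness" validation: compare melodic-interval histograms
# of generated music against a reference repertoire. Note sequences are invented.
import numpy as np

def interval_histogram(pitches, max_interval=12):
    """Normalized histogram of absolute melodic intervals (in semitones)."""
    intervals = np.abs(np.diff(pitches))
    intervals = np.clip(intervals, 0, max_interval)
    hist = np.bincount(intervals, minlength=max_interval + 1).astype(float)
    return hist / hist.sum()

reference = interval_histogram([60, 62, 64, 65, 67, 65, 64, 62, 60])  # e.g. a chorale line
generated = interval_histogram([60, 67, 62, 70, 60, 65, 59, 66, 60])  # e.g. model output

# Total variation distance: 0 means identical interval profiles, 1 maximally different.
tv_distance = 0.5 * np.abs(reference - generated).sum()
print(f"distance from reference repertoire: {tv_distance:.2f}")
```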
Yet, advances in music have historically not been made in easy listening. Avant-garde music (AGM) comprises all those works, composers and movements whose purpose is not to have people 'like' the work but rather to push boundaries, confront audiences with those very boundaries, and express messages that have an importance regardless of whether or not they are likeable.
How does validation in contemporary avant-garde music production relate to validation of AIM? What are the biggest challenges for AIM to accomplish such creative work? What challenges does AGM production experience from emerging AIM technologies? Do creative symbioses between human composers and AIM systems emerge, and how are their relative contributions negotiated?
Thinking through the relation between AGM and AIM is crucial to answering questions of creativity and critique. It stands as a model for a broader problematic of how AI relates to critical thinking, which is a crucial ingredient of citizenship, democracy and public life.