- Convenors: Francesco Miele (University of Trieste), Stefania Milan (University of Amsterdam), Simone Arnaldi (University of Trieste)
- Format: Traditional Open Panel
- Location: HG-05A00
- Sessions: Wednesday 17 July and Thursday 18 July (time zone: Europe/Amsterdam)
Short Abstract:
How can STS contribute to (re)making the transformations brought about by the advance of AI in society? We welcome empirical, theoretical and methodological contributions that help provide non-deterministic, anti-heroic and ‘flat’ narratives about AI and ongoing societal changes.
Long Abstract:
The recent renaissance of Artificial Intelligence (AI) and its growing relevance in public debate have fueled never-fading hopes and fears about the disruptive role of innovative machines in our societies. While social scientists have often partaken in the deconstruction of seductive myths concerning AI and its miraculous role in tackling pivotal issues of modern societies, this open panel aims to examine how STS can contribute to governing the transformations emerging around AI technologies. First, we encourage STS researchers to explore the advance of AI in high-stakes areas – including education, employment, credit scoring, entertainment, environmental preservation, cultural consumption and healthcare – providing accounts that move away from the dystopian and utopian narratives common in popular science as well as in parts of academia. Whereas enthusiastic and critical positions about AI have often been sustained by technological or social determinism, we welcome theoretical and empirical contributions that adopt a ‘flat’ ontology that puts all forms of agency on a par, without taking for granted that power asymmetries and transformative capabilities belong to specific human and non-human actors. Second, we invite scholars to reflect on the methodological ability of STS to give voice to the voiceless, supporting the theoretical and empirical efforts of proposing anti-heroic and non-deterministic narratives about AI. We believe that this can happen by developing methods and techniques that, on the one hand, explore the agency of actors generally excluded from the grand narratives about AI and, on the other hand, involve these actors in the enactment and evaluation of AI technologies. Possible topics include but are not limited to:
• Deconstruction of seductive myths
• Appropriation/resistance practices from below
• AI and vulnerable populations
• Non-human world(s)
• Failures/pitfalls of AI
• Power asymmetries and re-negotiations
• Digital/computational methods
• Post-qualitative and participatory research
Accepted papers:
Session 1: Wednesday 17 July 2024

Short abstract:
We argue that transitions from classic AI to GenAI were neither obvious nor inevitable. By rethinking “appropriation” in STS, we suggest that a far wider range of actors, artifacts, and imaginaries shaped GenAI’s trajectory than is usually acknowledged. Cui bono? Who benefits? At whose expense?
Long abstract:
While GenAI has elicited widely opposing views in various [intellectual] camps, from boosters and detractors alike, there has been little cross-disciplinary dialog particularly attuned to the complex and sustained dynamics of technology transformation and their implications within AI worlds.
We draw on appropriation—a concept widely used in STS and human-computer interaction (HCI)—defined as the adaptation or modification of artifacts by users in concert with others and with things. STS and HCI have traditionally studied appropriation using “micro” lenses, generating useful insights on user [human] action around artifacts.
In our work, we expand appropriation – through the “appropriation matrix,” a novel theoretical tool we have recently developed to study appropriation in global HCI [1] – to account for broader aspects such as continuities and discontinuities in technology use and design, and dynamic regulatory shifts. To this end, we foreground three elements of appropriation: users, artifacts, and imaginaries. That is, appropriation in GenAI worlds includes a changing cast of users, including ordinary people and research labs (both can appropriate technology!); artifacts that range from neat chat interfaces to complex AI models (where artifacts and infrastructures are recursively entangled); and imaginaries that users mobilize to rationalize or motivate their actions and practices (ranging from mundane stories and folklore to towering corporate and statist policies). The orientation of the appropriation matrix towards technology transitions, with their world-disclosing properties, not only exposes tensions within GenAI worlds but also highlights who stands to benefit and at whose expense.
[1] “Rethinking appropriation,” CHI 2024, forthcoming.
Short abstract:
This paper discusses how (non-)users define 'good' interaction with voice assistants based on an empirical qualitative analysis. It draws on different registers of value and indicates how VAs, as digital technology, and valuation strategies shape each other.
Long abstract:
Smart technology and artificial intelligence have become increasingly prevalent in our daily lives. Voice assistants (VAs) have entered our private households, offering time- and effort-saving benefits. However, attitudes and behaviors towards VAs remain contradictory among users and non-users. This research employs actor-network theory (ANT) (Michael 2017, Law 2008) and takes a symmetrical approach by focusing on both non-human actors, such as voice assistants, and human actors, including (non-)users. To gain deeper insights into the practices and relationships between VAs and (non-)users, the author conducted a qualitative analysis that combined card-based group discussions (Felt et al. 2018) and problem-centered interviews (Lueger 2010). The findings indicate that participants’ practices of appropriation and resistance are primarily based on classification and valuation strategies. The way in which (non-)users perceive and give meaning to these relationships is normative, based on their definition of ‘good’ interactions. This research shows how participants justify their definition by drawing on different registers of value in the sense of valuing as practice (Heuts & Mol 2013). Additionally, participants construct and negotiate responsibility around the values that define ‘good’ interaction with voice assistants. Finally, the findings indicate how VAs, as digital technology, and valuation strategies shape each other. This contribution adds to ongoing discussions in the fields of STS and valuation studies.
Short abstract:
This research engages with (im)possible 'knowledge bodies' in large language models (LLMs). It rethinks voices and narratives in AI through and with LLMs in experimental conversations. This post-qualitative method offers new insights and resistance to grand narratives through feminist approaches.
Long abstract:
This research explores the formation of 'knowledge bodies' that are engendered in/with large language models (LLMs) currently available on the Internet. LLMs are analysed as approximations of AI, following the industry's optimistic grand narratives and occasional controversies, such as fired Google researcher Blake Lemoine's claims about LaMDA's sentience. Extending concerns about the problematic sourcing of training data, the exclusion of specific experiences, and the concentration of machine learning innovation in Silicon Valley, the suggestion here is that the non-, weird- or (im)possible embodiment of datasets and language models has important implications for STS research on AI, including flat ontological perspectives on bodies and data, as well as the possibility of resistance through feminist and decolonial approaches. Combining cultural studies of data with data science techniques and performative experimentation with Llama, this research will document experiments in conversations with generative chatbots as an innovative post-qualitative method. These will pick up on feminist concerns for (im)possible bodies (Rocha and Snelting, 2022), imitation (Kind, 2022), bodies of water (Neimanis, 2017), bodies of work, knowing bodies and so on. It is an invitation to think about (im)possible embodiment as a tactic for refusing and complicating the binary choice between technocratic and technophobic narratives around data.
Short abstract:
In the context of Large Language Models (LLMs), I describe Participatory Prompting: a novel approach to the rising discipline of prompt engineering that attempts to redistribute user agency and subvert popular deterministic narratives of technological omniscience and algorithmic fetishism.
Long abstract:
This contribution speaks to the suggested topics of distributed agency and participatory methods. In this abstract, I describe an empirical and methodological effort, which I am currently developing as part of my PhD project at the University of Edinburgh, to re-imagine and re-design the rising discipline of prompt engineering in Artificial Intelligence (AI). In the context of Large Language Models (LLMs), prompt engineering refers to finding the most appropriate input – or prompt – to allow the model to solve a particular task (Liu et al., 2023, p.1; White et al., 2023). Because LLMs can, under certain conditions, generate novel textual instances that may appear humanlike, prompt engineering is often sensationalised through popular narratives of omniscience and algorithmic fetishism (Luitse and Denkena, 2021). Particularly relevant to the performativity of LLMs are sociotechnical accounts of computers as “thinking machines”, associated with promises of efficiency, rationality and objectivity (Alexander, 1990, p.162; Natale and Ballatore, 2020). Alexander (1990) draws a parallel between computational technologies and sacred entities, suggesting the existence of imagined associations between sophistication and awesomeness. In my work, I attempt to subvert deterministic narratives of prompt engineering and text generation by running workshops on what I refer to as Participatory Prompting. This effort follows the Design Justice framework, which holds that individuals and communities directly affected by the functionality of technological artefacts should form stances on technology design (Costanza-Chock, 2020). In these workshops, participants discuss relevant dimensions of their identity and co-design values-oriented prompts for an open-source LLM.
Short abstract:
Based on an ethnographic enquiry at the BBC, this paper contributes to the demystification of AI through a situated and non-deterministic account of the epistemic techniques employed to know and collaborate around an AI system in the making and the epistemological politics that shape this process.
Long abstract:
AI systems are generally portrayed as powerful yet abstract and inscrutable entities. In reaction to these dramatising portrayals, STS scholars have provided alternative, ‘grounded’ narratives of AI systems focussing on the practical and localised efforts required to make AI systems work. Building on these efforts to demystify AI, this paper provides a situated account of AI systems in the making. It ethnographically traces the everyday work and decisions of data scientists, engineers, product managers and editors within the BBC as they collaborate to develop recommender systems that can better distribute their vast collections of content. Whereas the existing literature has helped to ground the study of AI in the materiality of hardware and infrastructure as well as the socio-material labour of producing datasets for AI systems, this paper highlights a different socio-material practice. When making new AI systems, localised epistemic techniques are employed to enable different actors to ‘know’ and collaborate around emerging AI systems. In particular, the paper highlights the role of visualisations as techniques of knowing, as different visualisation tools are often at the centre of the collaborative practices. By analysing observational and interview data from an ethnographic enquiry at the BBC conducted from September 2023 to February 2024, the paper shows how visualisation tools shape the development of AI systems by enabling the actors to see certain 'particularities' of the system, while also abstracting away other ways of knowing it. Thereby, the paper sheds light on the epistemological politics that shape the making of AI systems.
Short abstract:
The paper focuses on 'voice cloning' as a pattern of appropriation of artificial intelligence by end-users in the music sector. Adopting the notion of "AI in the wild", it specifically addresses how this pattern of AI appropriation is becoming a source of new business models in the music industry.
Long abstract:
The paper addresses a distinctive pattern of adoption of artificial intelligence in the music sector, focusing specifically on the practice of unauthorised 'voice cloning'. Since 2023, the ability of anonymous social media end-users to use AI tools to produce music that mimics the voices of established artists has become visible, sparking a number of explicit controversies. The cloning of artists' voices to produce new music led to calls to sanction new forms of infringement, related not to the content of a song but to the voice and identity of an artist. While the music industry denounced the illegal appropriation of artists' sonic identity by end-users and mounted countermeasures against it, it also began to test new business models to commercially exploit the possibilities offered by AI-based voices.
Drawing on STS literature on the role of end-users in innovation processes (Oudshoorn and Pinch 2003; Hyysalo et al. 2016), the paper outlines the emergence of practices, tools and controversies related to the production of music based on voice cloning. Furthermore, it adopts notions such as 'AI in the wild' and 'outlaw innovation' (Soderberg 2016) to foreground the role of end-user appropriation practices in shaping patterns of innovation related to AI. The case of voice cloning and the focus on 'AI in the wild' allow us to highlight the role of appropriation processes and practices from below, thus contributing to the STS understanding of the transformations emerging around AI technologies.
Short abstract:
The presentation explores the co-construction of AI systems and users in cultural production. Drawing on in-depth interviews with professionals and STS concepts from user studies, it identifies distinct co-construction patterns, offering a nuanced understanding of human-machine collaboration.
Long abstract:
The presentation addresses how Generative AI (GenAI) systems and their users shape each other in the context of cultural production. It raises the question of how STS can provide a nuanced understanding of the socio-technical practices through which human and non-human actors establish relationships in situated contexts, focusing in particular on professional users in the cultural industries who have integrated GenAI systems into their work. In doing so, it adopts the perspective of the “co-construction” of users and technology (Oudshoorn & Pinch 2003) to highlight the different geographies of responsibility that emerge from the interactions between GenAI systems and their users. It shows how adopting an STS perspective on the user-technology relationship can help deconstruct simplistic interpretations of AI systems as neutral “tools” or, on the contrary, as heroic “autonomous agents”, interpretations that characterize widespread accounts of the use of AI in art and cultural production. Drawing on insights from in-depth semi-structured interviews conducted with professionals in the Italian context, the presentation aims to shed light on the ways in which GenAI users can be configured by design, but also renegotiate their role by de-inscribing AI technology, developing anti-programs, and organizing movements of technological resistance. The findings reveal distinct patterns of co-construction, highlighting the need to problematize human-machine “collaboration” in the context of cultural production.
Short abstract:
Despite being considered a major language for AI in the 80s, Prolog is now viewed as a minor language. Our analysis correlates its decline with the Japanese Fifth Generation Computer project. This case prompts a discussion on the broader impact of unmet promises in AI on foundational technologies.
Long abstract:
The Prolog programming language, conceived in 1972 at the University of Marseille, introduced the ‘logic-programming’ paradigm, well suited to symbolic artificial intelligence (AI) applications (Colmerauer 1993). In 1982, Prolog was selected as the main programming language for the ambitious Fifth Generation Computer Systems (FGCS) project led by the Japanese government, which triggered the “First AI Arms Race” (Garvey 2020) in the USA and Europe. Despite its popularity in the 1980s, Prolog faded into obscurity in the 1990s, with actors' narratives linking its fall to that of the FGCS project (van Emden 2010).
To what degree can Prolog be considered a collateral victim of the FGCS project? How is the scientific destiny of languages determined by the institutional and industrial framework of the projects that rely on them?
We conducted a comprehensive analysis using both quantitative and qualitative approaches. We gathered metadata from hundreds of articles in the ACM Digital Library and cross-referenced this data with proceedings and field reports from the FGCS project, supplementing our findings with testimonies from researchers involved at the time.
Findings reveal a correlation between Prolog's publication volume and the perceived state of the FGCS project. Surprisingly, only a small fraction of Prolog publications were directly associated with fifth-generation computers; instead, Prolog found its stronghold in expert systems and database applications. Irrespective of its practical applications, Prolog became indelibly linked with the Japanese initiative.
By discussing the Prolog case, we aim to open a dialogue on the impact that unrealistic techno-scientific promises in AI (Joly 2013) have on the technologies underlying them.
Short abstract:
The transformative effect of data and AI reaches almost all areas of our everyday life and knowledge economies. Research must not only comment from the sidelines, but engage where technological change manifests. This paper shows examples of studying up close and intervening effectively.
Long abstract:
The field of critical data and AI studies rightly questions the claims to objectivity, efficiency, and techno-solutionism made by the big tech sector and in media discourses. However, two problems are apparent here: a) the narrative is dominated by US-American perspectives, with societal institutions, governments and a technology sector widely different from the situation in the various EU countries; b) research often falls short of studying algorithms up close and within the socio-economic context of the organisations deploying them. The authors of this paper have consequently modelled their research practice differently. They immerse themselves in public management organisations and media industries to study up close not only the discourses on AI but the actual practices of implementation, use and governance of AI systems. The researchers do not enter as mere observers but as experts in governing AI systems, which allows them to intervene and take part in shaping the way algorithms are deployed in these organisations. They have developed a strong track record of societal impact by informing policy, developing widely used tools for the design, evaluation and assessment of algorithms, and creating learning formats for professionals. Drawing from STS and action research, this paper discusses methods for investigating and shaping the digital society. It discusses the benefits and pitfalls of this research practice, the privileged insights, the potential for societal impact, the learning opportunities for students and professionals, but also issues of complicity, dependence, and the changing role of the researcher and their academic host institution.
Short abstract:
This study examines AI's role in addressing the environmental crisis as portrayed in world-leading science journals and challenges the prevailing technological optimism.
Long abstract:
Whatever AI may actually be, it is by now presented by many, including scientists and engineers, as relevant, if not key, to addressing the environmental crisis. Building on the ongoing research of an Athens STS research team on the rhetoric surrounding AI and, further, on the way energy renewability may actually be defined, we propose to present a paper on the way the connection between AI and the environmental crisis is portrayed in world-leading science journals such as Nature, Science, Scientific American and New Scientist. The research to be presented can help expose the flaws in the current technological optimism (frequently, techno-solutionism) by closely examining what counts as AI in this literature and how exactly it is supposed to save the environment. It also examines, from the other end, how the environmental crisis is framed when it is presented as something that can be addressed by AI.
Short abstract:
This paper draws on fieldwork with scientists to examine the production and use of AI and machine learning for wildlife conservation, analyzing the datafied representation of non-human animals and considering its convergences with, and divergences from, the representations of humans produced through AI tools.
Long abstract:
AI and machine learning have become key tools for ecology and conservation. As these tools are deployed alongside new recording and tracking devices, they have turned previously inaccessible aspects of non-human animal life and behavior into data for conservation projects. Collectively, AI-for-conservation projects are intended to provide data and analysis that will enable interventions in ecosystems in order to improve them. Implicit in these initiatives is the idea of a better Anthropocene for nonhumans, one in which the human capacity to transform the world is used to improve, rather than degrade, ecosystems. In this paper, I draw on fieldwork with several groups of scientists to examine the production of AI for wildlife, making use of STS and multispecies ethnography to attend closely to the role of actants beyond the human and looking at how wildlife, objects, and ideas are all drawn into networks of practice. This paper examines how big data and AI for ecology privilege particular aspects of animal life, but also how the actual animal continues to matter a great deal. Through fieldwork and a focus on the situated lives and experiences of wild animals, this paper will attend to the kinds of animal subjectivity and experience that exceed what can be captured in datafied representations of animal lives, and it will consider the divergences and convergences between AI for wildlife and AI for humans.
Short abstract:
Exploring citizens’ agency in platform urbanism, we critique tokenism in smart city imaginaries. We advocate participatory methods to engage citizens and challenge power imbalances in data-driven urban AI systems, drawing on Feenberg’s critical constructivism.
Long abstract:
There is a long history of industry and states attempting to frame and set expectations about future cities and the possible disruptive impact of smart systems and digitalisation. The recent narrative of ‘platform urbanism’ is based on the so-called ‘pivot to platforms’ (Barns, 2019), given the advance of platformisation, algorithms and Artificial Intelligence (AI) in the context of cities. In our paper we critically discuss and examine ‘the right to the smart city’ in the context of urban platformisation, i.e. how to go beyond tokenism by empowering citizens and how this can transform current market-led imaginaries of smart cities into more equitable and sustainable ones (Mansell, 2012; Cardullo et al., 2019). For this we build on Feenberg’s critical theory of technology and his notion of ‘technical citizenship’ (2017), stressing the agency of citizens and how they can contribute to the construction and usage of data-driven AI platforms in a municipal context.
We illustrate this approach by comparing our experiences in two different countries of applying two innovative participatory methods for offering citizens a voice: ‘walkshops’ and ‘citizens think-ins’. These relatively low-technology methods have proved effective in making citizens and other municipal stakeholders more aware of, and better able to understand, the smart infrastructures in their cities, and they offer a potential route to influencing academic, corporate and city decision-makers more effectively. We demonstrate how critical constructivism can practically contribute to understanding and confronting the transformations and power asymmetries emerging around data-driven AI systems in cities.
Short abstract:
This contribution argues for the merits of investigating how co-designed AI/ML-based technologies are enacted by the practices and interpretations of different stakeholders within co-design processes.
Long abstract:
Today eHealth interventions are often provided through digital platforms, i.e., non-neutral infrastructural elements with specific socio-cultural norms, business goals and political relations embedded in their architecture (Schwennesen, 2019; Pronzato, 2023; Torenholt and Langstrup, 2023). The current expansion of AI/ML-based systems in healthcare is no exception in this regard, as these technologies can reproduce partial and discriminatory accounts of social life (Crawford, 2021).
Recently, co-design has emerged as a widespread participatory method for producing eHealth technologies that can empower patients and caregivers (Dietrich et al., 2021). However, participation in technological development can be considered a “matter of concern” (Andersen, 2015; cf. Latour, 2004), and not considering “the micro-politics of the relations that are built in co-design” (Huybrechts et al., 2020, p. 3) may risk reproducing rather than overcoming power asymmetries (Donia and Shaw, 2021).
Starting from the co-design of an e-learning platform for informal caregivers of patients with dementia (project AGE-IT, PNRR PE8 “Age-It”), this contribution bridges perspectives from STS, health sociology, critical algorithm studies and co-design. Specifically, drawing on Seaver’s (2017; cf. Mol, 2002) conceptualization of algorithmic technologies as artifacts “culturally enacted by the practices people use to engage with them” (p. 5), it argues for the merits of investigating how co-designed AI/ML-based technologies are enacted by the practices and interpretations of different stakeholders, e.g., IT designers, caregivers, patients and doctors.
In this scenario, a re-politicization of co-design emerges as essential to help respond to value conflicts and to translate STS insights into more robust participatory practices.
Short abstract:
The proposed paper investigates, from an ethnographic perspective, the development and use of AI in financial markets. It focuses on so-called robo-advisors, partly autonomous trading systems that take on the role of human portfolio managers.
Long abstract:
The proposed paper investigates, from an ethnographic perspective, the development and use of artificial intelligence (AI) in the financial sector in Germany and, more broadly, the European Union. The study is framed by STS-related questions pertaining to notions of the enactment of AI and practices of prediction within regimes of anticipation. As a case study, the paper focuses on so-called robo-advisors, partly autonomous trading systems that take on the role of human portfolio managers and pursue quantitative investment strategies. While most decisions and market interactions have been automated, the human remains firmly 'in the loop'. Through these assemblages of human and non-human actors, new forms of expertise emerge alongside more traditional economic knowledge, producing and engaging with new kinds of data to make and un-make markets. Concepts such as risk, responsibility, and accountability are re-negotiated and situated within new kinds of digital practices and infrastructures. The case of robo-advisors is of particular interest, as it intersects with a broader process of financialization and marketization by providing access to financial markets to private individuals or retail investors who seek to secure retirements and pensions through web applications or smartphone apps. The proposed paper draws on work in progress and as such invites further discussion and comments on preliminary findings.