- Convenors:
- Francesco Miele (University of Trieste), Stefania Milan (University of Amsterdam), Simone Arnaldi (University of Trieste)
- Format:
- Traditional Open Panel
Short Abstract:
How can STS contribute to (re)making the transformations brought about by the advance of AI in society? We welcome empirical, theoretical and methodological contributions that help provide non-deterministic, anti-heroic and ‘flat’ narratives about AI and ongoing societal changes.
Long Abstract:
The recent renaissance of Artificial Intelligence (AI) and its growing relevance in public debate have fueled never-dying hopes and fears about the disruptive role of innovative machines in our societies. While social scientists have often partaken in the deconstruction of seductive myths concerning AI and its miraculous role in tackling pivotal issues of modern societies, this open panel aims to examine how STS can contribute to governing the transformations emerging around AI technologies. First, we encourage STS researchers to explore the advance of AI in high-stakes areas – including education, employment, credit scoring, entertainment, environmental preservation, cultural consumption and healthcare – providing accounts that move away from the dystopian and utopian narratives common in popular science as well as in certain academic circles. Whereas enthusiastic and critical positions about AI have often been sustained by technological or social determinism, we welcome theoretical and empirical contributions that adopt a ‘flat’ ontology putting all forms of agency on a par, without taking for granted that power asymmetries and transformative capabilities belong to specific human and non-human actors. Second, we invite scholars to reflect on the methodological ability of STS to give voice to the voiceless, supporting the theoretical and empirical efforts of proposing anti-heroic and non-deterministic narratives about AI. We believe that this can happen by developing methods and techniques that, on the one hand, explore the agency of actors generally excluded from the grand narratives about AI and, on the other hand, involve these actors in the enactment and evaluation of AI technologies. Possible topics include but are not limited to:
• Deconstruction of seductive myths
• Appropriation/resistance practices from below
• AI and vulnerable populations
• Non-human world(s)
• Failures/pitfalls of AI
• Power asymmetries and re-negotiations
• Digital/computational methods
• Post-qualitative and participatory research
Accepted papers:
Session 1
Daniel Mwesigwa (Cornell University), Christopher Csikszentmihalyi (Cornell University)
Long abstract:
While GenAI has elicited widely opposing views in various [intellectual] camps, from boosters and detractors alike, there has been little cross-disciplinary dialog particularly attuned to the complex and sustained dynamics of technology transformation and their implications within AI worlds.
We draw on appropriation—a concept widely used in STS and human-computer interaction (HCI)—defined as the adaptation or modification of artifacts by users in concert with others and with things. STS and HCI have traditionally studied appropriation using “micro” lenses, generating useful insights on user [human] action around artifacts.
In our work, we expand appropriation—through the “appropriation matrix,” a novel theoretical tool we have recently developed to study appropriation in global HCI [1]—to account for broader aspects such as continuities and discontinuities in technology use and design, and dynamic regulatory shifts and changes. To this effect, we foreground three elements of appropriation: users, artifacts, and imaginaries. That is, appropriation in GenAI worlds includes a changing cast of users, including ordinary people and research labs (both can appropriate technology!); artifacts that range from neat chat interfaces to complex AI models (where artifacts and infrastructures are recursively entangled); and imaginaries that users mobilize to rationalize or motivate their actions and practices (ranging from mundane stories and folklore to towering corporate and statist policies). The orientation of the appropriation matrix towards technology transitions, with their world-disclosing properties, not only exposes tensions within GenAI worlds but also highlights who stands to benefit and at whose expense.
[1] “Rethinking appropriation,” CHI 2024, forthcoming.
Bettina Pospisil (University of Continuing Education Krems)
Long abstract:
Smart technology and artificial intelligence have become increasingly prevalent in our daily lives. Voice assistants (VAs) have entered our private households, offering time- and effort-saving benefits. However, attitudes and behaviors towards VAs remain contradictory among users and non-users. This research employs actor-network theory (ANT) (Michael 2017, Law 2008) and takes a symmetrical approach by focusing on both non-human actors, such as voice assistants, and human actors, including (non-)users. To gain deeper insights into the practices and relationships between VAs and (non-)users, the author conducted a qualitative analysis that combined video analysis (Reichertz & Englert 2011), card-based group discussions (Felt et al. 2018), and problem-centered interviews (Lueger 2010). The findings indicate that participants’ practices of appropriation and resistance are primarily based on classification and valuation strategies. The way in which (non-)users perceive and give meaning to these relationships is normative, based on their definition of ‘good’ interactions. This research shows how participants justify their definition by drawing on different registers of value, in the sense of valuing as practice (Heuts & Mol 2013). Additionally, participants construct and negotiate responsibility around the values that define ‘good’ interaction with voice assistants. Finally, the findings indicate how VAs, as digital technologies, and valuation strategies shape each other. This contribution adds to ongoing discussions in the fields of STS and valuation studies.
Yu Sang (University of Amsterdam)
Long abstract:
The anthropomorphization of communicative AI shows a tendency toward feminization. However, in research about female communicative AI, communication history is always absent. Similarly, research on human-AI communication sometimes lacks a gender perspective. This paper focuses on the interconnected connotations of communication, medium, and the female to approach the feminization of communicative AI. It examines how the concept of the female interplays with the changing meanings of communication and medium when non-human communicative subjects are considered.
In the first part, I trace the developmental history of communicative AI. Drawing insights from computer science, STS, and communication studies, I explore the meanings of communication, machine intelligence, and the human mind, which were highly masculinized in communicative AI’s early development. In the second part, I move to the notion of the female medium, tracing the histories of female-mediated communication and connecting them with contemporary female communicative AI. In different historical periods, women occupied an intermediary position between the living and the dead, the corporeal and the spiritual, and continued this position between the human and the machine. I argue that the persistent figuration of the female medium serves as an allegorical reflection of ambivalent attitudes towards contemporary human-AI communication: the desire for perfect communication and the anxiety of confronting an unknowable yet powerful machine. Meanwhile, gender stereotypes about women, such as being mentally inferior, emotional, and inattentive, persist in female communicative AI and continue to produce a real impact on female labor.
Selena Savic (University of Amsterdam)
Long abstract:
This research explores the formation of 'knowledge bodies' that are engendered in/with large language models (LLMs) currently available on the Internet. LLMs are analysed as approximations of AI, following the industry's optimistic grand narratives and occasional controversies, such as the fired Google researcher Blake Lemoine's claims about LaMDA's sentience. Extending the concerns for problematic sourcing of training data, the exclusion of specific experiences, and the concentration of machine learning innovation in Silicon Valley, the suggestion here is that the non-, weird- or (im)possible embodiment of datasets and language models has important implications for STS research on AI, including flat ontological perspectives on bodies and data, as well as the possibility for resistance through feminist and decolonial approaches. Combining cultural studies of data with data science techniques and performative experimentation with Llama, this research will document experiments in conversations with generative chatbots as an innovative post-qualitative method. These will pick up on feminist concerns for (im)possible bodies (Rocha and Snelting, 2022), imitation (Kind, 2022), bodies of water (Neimanis, 2017), bodies of work, knowing bodies and so on. It is an invitation to think about (im)possible embodiment as tactics for refusing and complicating the binary choice between technocratic and technophobic narratives around data.
Anna Schjøtt Hansen (University of Amsterdam)
Long abstract:
AI systems are generally portrayed as powerful yet abstract and inscrutable entities. In reaction to these dramatising portrayals, STS scholars have provided alternative, ‘grounded’ narratives of AI systems focussing on the practical and localised efforts required to make AI systems work. Building on these efforts to demystify AI, this paper provides a situated account of AI systems in the making. It ethnographically traces the everyday work and decisions of data scientists, engineers, product managers and editors within the BBC as they collaborate to develop recommender systems that can better distribute their vast collections of content. Whereas the existing literature has helped to ground the study of AI in the materiality of hardware and infrastructure as well as the socio-material labour of producing datasets for AI systems, this paper highlights a different socio-material practice. When making new AI systems, localised epistemic techniques are employed to enable different actors to ‘know’ and collaborate around emerging AI systems. Particularly, the paper highlights the role of visualisations as techniques of knowing, as different visualisation tools are often at the centre of the collaborative practices. By analysing observational and interview data from an ethnographic enquiry at the BBC conducted from September 2023 to February 2024, the paper shows how the visualisation tools shape the development of AI systems by enabling the actors to see certain 'particularities' of the system, while also abstracting away other ways of knowing the system. Thereby, the paper sheds light on the epistemological politics that shape the making of AI systems.
Paolo Magaudda (University of Padova)
Long abstract:
The paper addresses a distinctive pattern of adoption of artificial intelligence in the music sector, focusing specifically on the practice of unauthorised 'voice cloning'. Since 2023, the ability of anonymous social media end-users to use AI tools to produce music that mimics the voices of established artists has become visible, sparking a number of explicit controversies. The cloning of artists' voices to produce new music led to calls to sanction new forms of infringement, related not to the content of a song but to the voice and identity of an artist. While the music industry denounced the illegal appropriation of artists' sonic identities by end-users and mounted activities to counter it, it also began to test new business models to commercially exploit the possibilities offered by AI-based voices.
Drawing on STS literature on the role of end-users in innovation processes (Oudshoorn and Pinch 2003; Hyysalo et al. 2016), the paper outlines the emergence of practices, tools and controversies related to the production of music based on voice cloning. Furthermore, it adopts notions such as 'AI in the wild' and 'outlaw innovation' (Soderberg 2016) to foreground the role of end-user appropriation practices in shaping patterns of innovation related to AI. The case of voice cloning and the focus on 'AI in the wild' allow us to highlight the role of appropriation processes and practices from below, thus contributing to the STS understanding of the transformations emerging around AI technologies.
Sergio Minniti (Politecnico di Torino)
Long abstract:
The presentation addresses issues related to how Generative AI (GenAI) systems and their users shape each other in the context of cultural production. It raises the question of how STS can provide a nuanced understanding of the socio-technical practices through which human and non-human actors establish relationships in situated contexts, focusing in particular on professional users in the cultural industries who have integrated GenAI systems into their work. In doing so, it adopts the perspective of the “co-construction” of users and technology (Oudshoorn & Pinch 2003) to highlight the different geographies of responsibility that emerge from the interactions between GenAI systems and their users, and shows how adopting an STS perspective on the user-technology relationship can help deconstruct simplistic interpretations of AI systems as neutral “tools” or, on the contrary, as heroic “autonomous agents”, which characterize widespread interpretations of the use of AI in art and cultural production. Drawing on insights from in-depth semi-structured interviews conducted with professionals within the Italian context, the presentation aims to shed light on the ways in which GenAI users can be configured by design, but also renegotiate their role by de-inscribing AI technology, developing anti-programs, and organizing movements of technological resistance. The findings reveal distinct patterns of co-construction, highlighting the need to problematize human-machine “collaboration” in the context of cultural production.
Martin Dolský (Charles University)
Long abstract:
Set against the evolving Czech higher education landscape, this paper explores the integration of Large Language Models (LLMs) into academia, examining the dynamic interplay between students, educators, and generative AI technologies. Employing a participatory, ethnographic approach—including interviews, participation in ethical codex discussions, and peer-to-peer workshops—this study weaves together a diverse range of perspectives and experiences from students as well as educators. By adopting a 'flat' ontology, it levels the playing field among all actors—students, educators, LLMs, the media, and institutional policies—thereby underscoring their collective impact on the resulting educational practices.
Challenging prevailing narratives about assessment integrity and plagiarism, I suggest that these concerns indicate not a dystopian disruption of education or human cognition, but rather a renegotiation of existing power hierarchies within a flawed system of higher education. By amplifying student voices—often sidelined in mainstream media and academic discussions on the topic—the paper presents a nuanced understanding of LLM integration into learning in practice, steering away from both technological determinism and the temptation to downplay the significance of these technologies.
Highlighting students' active engagement with LLMs, I endeavor to outline their nuanced, collaborative, 'cyborgian' practices, thus contributing to a richer dialogue on the interplay between technology and education. Varied, even contradictory, attitudes and approaches are revealed, ranging from avoidance, through curiosity, to pragmatic, strategic or even subversive uses, reflecting both hype and disappointment about the promises of technologies and education—but always with the ultimate goal of learning in sight.
Lara Dal Molin (The University of Edinburgh)
Long abstract:
This contribution suits the suggested topics of distributed agency and participatory methods. In this abstract, I describe an empirical and methodological effort, which I am currently developing as part of my PhD project at the University of Edinburgh, to re-imagine and re-design the rising discipline of prompt engineering in Artificial Intelligence (AI). In the context of Large Language Models (LLMs), prompt engineering refers to finding the most appropriate input – or prompt – to allow the model to solve a particular task (Liu et al., 2023, p.1; White et al., 2023). Due to the capability of LLMs to generate, under certain conditions, novel textual instances that may appear humanlike, prompt engineering is often sensationalised through popular narratives on omniscience and algorithmic fetishism (Luitse and Denkena, 2021). Particularly relevant to the performativity of LLMs are sociotechnical accounts of computers as “thinking machines”, associated with promises of efficiency, rationality and objectivity (Alexander, 1990, p.162; Natale and Ballatore, 2020). Alexander (1990) draws a parallel between computational technologies and sacred entities, suggesting the existence of imagined associations between sophistication and awesomeness. In my work, I attempt to subvert deterministic narratives on prompt engineering and text generation by running workshops on what I refer to as Participatory Prompting. This effort observes the Design Justice framework, suggesting that individuals and communities directly affected by the functionality of technological artefacts should form stances on technology design (Costanza-Chock, 2020). In these workshops, participants discuss relevant dimensions of their identity and co-design values-oriented prompts for an open-source LLM.
Mathilde Fichen (CNAM Paris)
Long abstract:
The Prolog programming language, conceived in 1972 at the University of Marseille, introduced the ‘logic-programming’ paradigm well-suited for symbolic artificial intelligence (AI) applications (Colmerauer 1993). In 1982, Prolog was selected as the main programming language for the ambitious Fifth Generation Computer Systems (FGCS) project led by the Japanese government, which triggered the “First AI Arms Race” (Garvey 2020) in the USA and Europe. Despite its popularity in the 1980s, Prolog faded into obscurity in the 1990s, with actors' narratives linking its fall to that of the FGCS project (van Emden 2010).
To what degree can Prolog be considered a collateral victim of the FGCS project? How is the scientific destiny of languages determined by the institutional and industrial framework of the projects that rely on them?
We conducted a comprehensive analysis using both quantitative and qualitative approaches. We gathered metadata from hundreds of articles in the ACM Digital Library and cross-referenced this data with proceedings and field reports from the FGCS project, supplementing our findings with testimonies from researchers involved at the time.
Findings reveal a correlation between Prolog's publication volume and the perceived state of the FGCS project. Surprisingly, only a small fraction of Prolog publications were directly associated with fifth-generation computers; instead, Prolog found its stronghold in expert systems and database applications. Irrespective of its practical applications, Prolog became indelibly linked with the Japanese initiative.
By discussing the Prolog case, we aim to open a dialogue on the impact of unrealistic techno-scientific promises (Joly 2013) in AI on underlying technologies.
Mirko Schäfer (Utrecht University), Karin van Es
Long abstract:
The field of critical data & AI studies rightly questions the claims to objectivity, efficiency, and techno-solutionism made by the big tech sector and in media discourses. However, two problems are apparent here: a) the narrative is dominated by US American perspectives, with societal institutions, governments and a technology sector widely different from the situation in the various EU countries; b) research often falls short in studying algorithms up close and within the socio-economic context of the organisations deploying them. The authors of this paper have consequently modelled their research practice differently. They immerse themselves in public management organisations and media industries to study up close not only the discourses on AI but the actual practices of implementation, use and governance of AI systems. The researchers do not enter as mere observers but as experts in governing AI systems, which allows them to intervene and take part in shaping the way algorithms are deployed in these organisations. They have developed a strong track record of societal impact by informing policy, developing widely used tools for the design, evaluation and assessment of algorithms, and creating learning formats for professionals. Drawing from STS and action research, this paper discusses methods for investigating and shaping the digital society. It discusses the benefits and pitfalls of this research practice, the privileged insights, the potential for societal impact, the learning opportunities for students and professionals, but also issues of complicity, dependence, and the changing role of the researcher and their academic host institution.
Elli Danae Vartziotis (National and Kapodistrian University of Athens), Aristotle Tympas (National and Kapodistrian University of Athens)
Long abstract:
Whatever AI may actually be, it is by now presented by many, including scientists and engineers, as relevant, if not the key, to addressing the environmental crisis. Building on the ongoing research of an Athens STS research team on the rhetoric surrounding AI and, further, on how energy renewability may actually be defined, we propose to present a paper on the way the connection between AI and the environmental crisis is portrayed in world-leading science journals such as Nature, Science, Scientific American and New Scientist. The research to be presented can help expose the flaws in the current technological optimism (frequently, techno-solutionism) by closely examining what counts as AI in this literature and how exactly it is supposed to save the environment. From the other end, we also examine how the environmental crisis is framed when it is presented as something that can be addressed by AI.
Matthias Kloft (Goethe University Frankfurt)
Long abstract:
The proposed paper investigates, from an ethnographic perspective, the development and use of artificial intelligence (AI) in the financial sector in Germany and the European Union more broadly. The study is framed by STS-related questions pertaining to notions of the enactment of AI and practices of prediction within regimes of anticipation. As a case study, the paper focuses on so-called robo-advisors, partly autonomous trading systems that take on the role of human portfolio managers and pursue quantitative investment strategies. While most decisions and market interactions have been automated, the human remains firmly 'in the loop'. Through these assemblages of human and non-human actors, new forms of expertise emerge alongside more traditional economic knowledge, producing and engaging with new kinds of data to make and un-make markets. Concepts such as risk, responsibility, and accountability are re-negotiated and situated within new kinds of digital practices and infrastructures. The case of robo-advisors is of particular interest, as it intersects with a broader process of financialization and marketization by providing access to financial markets to private individuals or retail investors who seek to secure retirements and pensions through web applications or smartphone apps. The proposed paper draws on work in progress and as such invites further discussion and comments on preliminary findings.
Nela Sljivljak (Johannes Kepler University Linz, Austria), Nicole Kronberger (Johannes Kepler University Linz)
Long abstract:
Research shows that people tend to be skeptical about the use of AI technology in medicine. However, our hypothesis, “relationship trumps technology”, implies that it is central to consider the quality of the physician-patient relationship in which the technology is embedded. The hypothesis states that if the relationship between physician and patient is good, the technology used, including AI, will be of secondary importance in medical diagnostics. More specifically, it is assumed that the quality of communication and the uniqueness neglect perceived by the patient will affect patients’ trust in their physician.
The “relationship trumps technology” hypothesis was investigated in two online scenario-based 2x2 between-subject vignette studies (high- vs. low-quality communication; use of AI vs. no use of AI). Study 1 was conducted in a US context (N=350), while study 2 involved Austrian participants (N=527).
In contrast to studies that find skepticism about AI, we find neither general opposition to nor support for AI. Participants who gave their physicians higher communication scores also perceived less uniqueness neglect and showed more trust in their physicians. These results corroborate the “relationship trumps technology” hypothesis, indicating that the physician-patient relationship is essential and technology use is embedded in this relationship. Further, even if study participants voice fears and concerns about AI decision-support tools, they also acknowledge their benefits. Importantly, participants do not want to leave decisions to AI technology alone, confirming the importance of the physician.
Keywords: trust, artificial intelligence, perceived uniqueness neglect, physician-patient relationship, medical decision-making
Riccardo Pronzato (University of Bologna)
Long abstract:
Today, eHealth interventions are often provided through digital platforms, i.e., non-neutral, infrastructural elements with specific socio-cultural norms, business goals and political relations embedded in their architecture (Schwennesen, 2019; Pronzato, 2023; Torenholt and Langstrup, 2023). The current expansion of AI/ML-based systems in healthcare is no exception in this regard, as partial and discriminatory accounts of social life can be reproduced by these technologies (Crawford, 2021).
Recently, co-design has emerged as a widespread participatory method for producing eHealth technologies that can empower patients and caregivers (Dietrich et al., 2021). However, participation in technological development can be considered a “matter of concern” (Andersen, 2015; cf. Latour, 2004), and not considering “the micro-politics of the relations that are built-in co-design” (Huybrechts et al., 2020, p. 3) may risk reproducing rather than overcoming power asymmetries (Donia and Shaw, 2021).
Starting from the co-design of an e-learning platform for informal caregivers of patients with dementia (project AGE-IT, PNRR PE8 “Age-It”), this contribution bridges perspectives from STS, health sociology, critical algorithm studies and co-design. Specifically, drawing on Seaver’s (2017; cf. Mol, 2002) conceptualization of algorithmic technologies as artifacts “culturally enacted by the practices people use to engage with them” (p. 5), it argues for the merits of investigating how co-designed AI/ML-based technologies are enacted by the practices and interpretations of different stakeholders, e.g., IT designers, caregivers, patients, doctors, etc.
In this scenario, a re-politicization of co-design emerges as essential to help respond to value conflicts and translate STS into more robust participatory practices.
Emily Wanderer (University of Pittsburgh)
Long abstract:
AI and machine learning have become key tools for ecology and conservation. As these tools are deployed alongside new recording and tracking devices, they have turned previously inaccessible aspects of non-human animal life and behavior into data for conservation projects. Collectively, AI-for-conservation projects are intended to provide data and analysis that will enable interventions in ecosystems in order to improve them. Implicit in these initiatives is the idea of a better Anthropocene for nonhumans, one in which the human capacity to transform the world is used to improve, rather than degrade, ecosystems. In this paper, I draw on fieldwork with several groups of scientists to examine the production of AI for wildlife, making use of STS and multispecies ethnography to attend closely to the role of actants beyond the human, looking at how wildlife, objects, and ideas are all drawn into networks of practice. This paper examines how big data and AI for ecology privilege particular aspects of animal life, but also how the actual animal continues to matter a great deal. Through fieldwork and a focus on the situated lives and experiences of wild animals, this paper will attend to the kinds of animal subjectivity and experience that exceed what can be captured in datafied representations of animal lives and will consider the divergences and convergences between AI for wildlife and AI for humans.
Jo Pierson (Hasselt University VUB), Aphra Kerr (Maynooth University)
Long abstract:
There is a long history of industry and states attempting to frame and set expectations about future cities and the possible disruptive impact of smart systems and digitalisation. The recent narrative of ‘platform urbanism’ is based on the so-called ‘pivot to platforms’ (Barns, 2019), given the advance of platformisation, algorithms and Artificial Intelligence (AI) in the context of cities. In our paper we critically discuss and examine ‘the right to the smart city’ in the context of urban platformisation, i.e. how to go beyond tokenism by empowering citizens and how this can transform current market-led imaginaries of smart cities into more equitable and sustainable cities (Mansell, 2012; Cardullo et al., 2019). For this we build on Feenberg’s critical theory of technology and the notion of ‘technical citizenship’ (2017), stressing the agency of citizens and how they can contribute to the construction and usage of data-driven AI platforms in a municipal context.
We illustrate this approach by comparing our experiences in two different countries applying two innovative participatory methods for offering citizens a voice: ‘walkshops’ and ‘citizens think-ins’. These relatively low-technology methods have proved effective in making citizens and other municipal stakeholders more aware of, and better able to understand, smart infrastructures in their cities, and they offer a potential route to influencing academic, corporate and city decision makers more effectively. We demonstrate how critical constructivism can practically contribute to understanding and confronting the transformations and power asymmetries emerging around data-driven AI systems in cities.