- Convenors:
- Kara White (Osaka University of Economics), Raffaele Andrea Buono (Ca' Foscari University of Venice)
- Discussant:
- Philippe Sormani (Zurich University of the Arts)
- Format:
- Combined Format Open Panel
Short Abstract
GenAI has taken the world by storm – and in its wake, STS and social researchers have been left dizzied, dismayed, and disheveled. But what if we could do genAI differently? This panel/workshop seeks contributions that engage WITH genAI – whether playful, mocking, or falling through the cracks.
Description
This combined format open panel seeks contributions that not only challenge how STS orients itself to the ongoing onslaught of generative AI as an ensemble of sociomaterial practices, including code, algorithms, multivariate vectors, databases of text, GPUs and server farms, corporations, and so, so much more, but that also challenge us to interact/code/perform/experience otherwise with these technologies. The panel combines traditional papers with a collaborative, interactive workshop; contributions should exploit and agitate genAI technologies.
What happens when genAI is reimagined as method, rather than as a tool to be applied to social science research or as a kind of artificial research collaborator? Taking inspiration from experimental prototyping (Corsin Jimenez & Estalella 2017) and from attention to “ethnographic projection” as a way to “game ethnography” (Farias & Criado 2023), we have a moment in which we can tinker with forms of genAI to rethink “toolmaking” (Chao et al. 2024) as a critical technical practice (Agre 1997): not to merge or break apart the supposed separation between critique and technical engagements, but to design with and against genAI differently (or differentially? cf. Munster 2025). And yet genAI is not a universalizing monolith (cf. Lee & Ribes 2025; Sadowski 2025), and care must be taken (Ruckenstein & Trifuljesko 2023) to ground and particularize these practices and connections (e.g., Flore 2025).
Making and doing in STS has enabled alternative forms of knowledge-making and knowledge-expression. Can we intervene and invent with/through/alongside genAI experimental prototypes, beyond generating bullshit (Hicks, Humphries & Slater 2024)? Is there (de)generative potential in playfully designing or un-designing absurd solutions to non-existent problems? Traditional academic papers are welcome, as are more speculative, artistic, performative, and irreverent contributions. What can we (de)generate together?
Accepted contributions
Session 1
Short abstract
This methodographic story explores the co-composition of a book chapter with Large Language Models. By focusing on how visualisation choices emerge in distributed reasoning, it reframes generative AI as a partially tamed co-constituent, foregrounding epistemic, technological & ethical entanglements.
Long abstract
The rise of genAI invites a rethinking of how STS scholars study and account for distributed intelligences in contemporary research and writing practices. I critically examine the performative entanglements that arise when co-composing academic texts with Large Language Models (LLMs). Anchored in methodographic practice (Lippert & Mewes 2021) and a Baradian diffractive analytical sensibility, I reconstruct the making of a co-composed book chapter, elaborating on how epistemic-onto-performative effects emerge when text creation is distributed across human-LLM interactions.
Focusing specifically on the negotiation of visualisation choices—aimed at illustrating analytical distinctions within the chapter—I explore how interpretive, methodological, and epistemic commitments manifest through interactions with GPT, including the co-analysis of Python code used to transform hand-drawn sketches into graphics. This inquiry reveals how reasoning is enacted as a co-constitutive and iterative process with ambiguously bordered agencies spanning human cognition, language model affordances, and platform infrastructures. Specifically, it attends to the generative tensions that arise when deploying LLMs simultaneously as epistemic tools, media-logical actors, and objects of reflective inquiry.
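To give a concrete sense of the kind of script at stake in that co-analysis, the following is a minimal, hypothetical sketch, assuming matplotlib; the coordinates, labels, and file name are invented for illustration and are not the chapter's actual code.

```python
# Illustrative only: the kind of small matplotlib script a human-LLM
# co-analysis might revolve around, redrawing a hand-drawn distinction
# diagram as clean vector graphics. All values here are invented.

import matplotlib.pyplot as plt

# Digitized anchor points traced from a hand-drawn sketch (hypothetical)
boundary_x = [0.0, 0.3, 0.6, 1.0]
boundary_y = [0.2, 0.5, 0.45, 0.8]

fig, ax = plt.subplots(figsize=(4, 3))
ax.plot(boundary_x, boundary_y, color="black")  # the drawn distinction
ax.annotate("human", (0.2, 0.7))                # hypothetical region labels
ax.annotate("LLM", (0.7, 0.25))
ax.set_axis_off()
fig.savefig("distinction.svg")                  # vector output for the chapter
```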
Ultimately, this methodographic vignette contributes to reframing generative AI from being either merely instrumental or destructive towards becoming a partially tamed co-constituent in a method assemblage. By centering on how reasoning itself is distributed, in terms of visualisation, language, and distinction-making, this work foregrounds the underexplored intersections of empirical philosophy, technological mediation, and responsibility. It invites STS scholars to consider not just how methods are made with genAI but also how these entangled practices are implicated in the production of situated knowledge, accountability, and socio-material worlding.
Short abstract
Prevalent benchmarks for cultural diversity in LLMs stabilize output resemblance with global value surveys as ground truth. We conduct an experiment to explore whether output resemblance generalizes to alignment in practice and how users negotiate encounters with nominally misaligned models.
Long abstract
Prevalent benchmarks for cultural diversity in large language models (LLMs) assume that alignment between human and AI can be measured independently of the interaction between them. When a model can match the way culturally diverse humans answer questions on, for example, the World Values Survey, this is taken as an indicator of its capacity for cultural alignment.
However, it is not self-evident that such alignment in output leads to alignment in practice. One could thus reasonably hypothesize both that users interpret and negotiate what counts as value alignment in situ and that output resemblance with value-aligned survey responses does not generalize easily to such situations. Current alignment benchmarks of cultural diversity in generative AI thus likely ignore the sociotechnical contingencies of LLMs in use and prematurely reduce the problem of alignment to something that can be stably measured and easily corrected for.
As part of the Culturally Explainable AI (CXAI) project, funded by the Independent Research Fund Denmark, we develop a gamified experiment in which participants are asked to evaluate outputs from models that are deliberately aligned and misaligned with their value positions according to current state-of-the-art benchmarks. The aim is twofold. First, the experiment explores the hypothesis that alignment measured by output resemblance does not generalize to alignment in practice. Second, it functions as a prototype for what we envision as a future elicitation device for qualitatively investigating the situated sense-making of LLMs in different contexts. We discuss initial findings and experiences with using this experimental prototype to study GenAI-human relations.
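For readers unfamiliar with how such output-resemblance benchmarks work mechanically, the following is a toy sketch of the logic being critiqued: score a model's answer distribution on a survey-style item against a human reference distribution. All names and figures are hypothetical; this is not the CXAI project's benchmark code.

```python
# Toy version of "output resemblance" scoring: compare a model's answer
# distribution to a human survey reference via total variation distance.
from collections import Counter

def resemblance_score(model_answers: list[str],
                      survey_distribution: dict[str, float]) -> float:
    """Return 1 - total variation distance between the model's answer
    distribution and a human survey reference distribution."""
    counts = Counter(model_answers)
    total = sum(counts.values())
    model_dist = {k: counts.get(k, 0) / total for k in survey_distribution}
    tvd = 0.5 * sum(abs(model_dist[k] - survey_distribution[k])
                    for k in survey_distribution)
    return 1.0 - tvd

# Hypothetical WVS-style item: "How important is tradition in your life?"
human_reference = {"very": 0.4, "somewhat": 0.35, "not": 0.25}  # made-up figures
model_outputs = ["very", "very", "somewhat", "not", "very"]     # stubbed model answers

print(f"resemblance: {resemblance_score(model_outputs, human_reference):.2f}")
```

A high score here says only that the distributions look alike; the abstract's point is that this says little about how alignment plays out in situated use.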
Short abstract
We present “perspectival models” and explore the opportunities and challenges of modelling perspectives on preferences in public transportation use, in an attempt to promote public engagement in urban planning.
Long abstract
Can we enliven human perspectives on how cities are experienced using GenAI? How is it possible to create a tool that is both successful at conveying citizens' experiences and ideas and useful for planning urban environments? We present a case study of the everyday public transportation environment in Winterthur, Switzerland, where users were asked to document their experiences via annotated photos. The study is driven by a methodological ambition to find new ways of giving voice to perspectives that are divergent and often excluded, simplified, or muted in bureaucratic processes. We do so by drawing on recent explorations of bringing perspectives into interaction with AI and the urban environment (Kozlowski & Evans, 2025; Nelson, 2021; Noyman et al., 2025), and introduce what we call "perspectival models", a term borrowed from Underwood (2019), built by combining fine-tuned GenAI and retrieval-augmented generation (RAG) over multimedia documentation of images, texts, voice notes, and metadata of time and space. In doing so, we prototype a tool for planning that repurposes digital traces of how the city is experienced by humans, proposing an alternative form of “Soft City Sensing” (Madsen, forthcoming; Madsen et al., 2022; Raban, 1974). Bringing our perspectival models into interaction with one another, we are interested in uncovering common themes and disagreements in citizens’ experiences. We will demonstrate these models during the panel and reflect on the ways in which they could succeed in involving citizens’ voices in urban planning. Are they able to convey citizens’ ideas? Or do they merely serve as an elicitation device?
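As a rough illustration of the retrieval half of such a pipeline, the sketch below shows annotated photo documentation being retrieved and folded into a prompt. The data, the keyword-overlap retriever, and all function names are assumptions made for illustration; a real perspectival model would use fine-tuned models and multimodal embeddings as described above.

```python
# Minimal, hypothetical sketch of the RAG step of a "perspectival model":
# retrieve residents' annotated observations and condition generation on them.
from dataclasses import dataclass

@dataclass
class Annotation:
    text: str       # resident's note attached to a photo
    place: str      # stop or street name
    timestamp: str  # when the photo was taken

corpus = [
    Annotation("Shelter too small when it rains", "Hauptbahnhof", "2024-05-02 08:10"),
    Annotation("Nice trees, but the bench is broken", "Technikumstrasse", "2024-05-03 17:45"),
]

def retrieve(query: str, docs: list[Annotation], k: int = 1) -> list[Annotation]:
    """Toy retriever: rank annotations by word overlap with the query.
    A real pipeline would embed images, texts, and voice notes."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.text.lower().split())))[:k]

def build_prompt(query: str) -> str:
    """Fold retrieved notes into a prompt for a (stubbed) generative model."""
    context = "\n".join(
        f"- {a.place} ({a.timestamp}): {a.text}" for a in retrieve(query, corpus)
    )
    return ("Answer as the documented perspective of Winterthur transit users, "
            f"grounded only in these notes:\n{context}\n\nQuestion: {query}")

print(build_prompt("Is the shelter adequate when it rains?"))
```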
Short abstract
This paper examines epistemic quality, ethical integrity, and practical sovereignty as three dimensions of responsible cooperation with AI systems in STS-related research and beyond.
Long abstract
STS-informed research has long engaged critically with the challenging dimensions of digital research methodologies while proposing constructive ways to navigate them (Rogers, 2013). This attentiveness to the performativity of methods is reflected in approaches advocating for inventive methods (Wakeford & Lury, 2014), multifarious instruments (Marres, 2017), and an embrace of methodological messiness (Law, 2004; 2006). At the same time, researchers across disciplines are increasingly experimenting with AI-based tools to investigate matters of concern in science and technology, from technology impact analysis and ethics research to computational social science (Hirsbrunner et al., 2022; Hirsbrunner, 2025).
The integration of generative and agentic AI systems into research practice introduces, however, a distinct set of methodological, ethical, and practical challenges. Drawing on our own investigations experimenting with generative and agentic AI elements, we ask how epistemic quality, ethical integrity, and practical sovereignty can be cultivated within such research constellations. By epistemic quality, we mean the scientific soundness of AI-assisted inquiry: to what extent must AI-generated outputs be reproducible and epistemically accountable? Which methodologies and insights prevail beyond the current AI hype? By ethical integrity, we engage with the normative challenges of operating alongside AI systems, acknowledging their role as vectors of hegemonic knowledge production and their susceptibility to discriminatory bias. By sovereignty, we refer to researchers' capacity to retain meaningful agency over their methods, instruments, and research objects, encompassing technological dependency, contestability, and the interchangeability of sociotechnical elements. We argue that these three dimensions are mutually entangled and demand an integrated analytical approach.
Short abstract
This prototyping experiment asks what and how we can not-know by designing with (vibe coding with) genAI to create a useless ethnographic fieldnote-generating chatbot, while arguing that vibe coding is an ethnographic practice that intervenes with technicities materially as well as critically.
Long abstract
“Vibe Coding” emerged in early 2025, spawning not just reels, memes, and discussion; its popularity has also led to models being trained to do just that: generate code based on text prompts. Does this code actually compile? Sometimes, but often not (Danassis & Goel 2025; Fortes-Ferreira et al. 2025). Taking cues from critical making (cf. Bogers & Chiappini 2019), that is, from engaging critically and materially for specific purposes, this experiment in prototyping an ethnographic chatbot asks how genAI can be appropriated otherwise, in the most absurd way possible.
My ongoing “StoryGen” project tinkers with genAI as a kind of “ethnographic projection” (Farias & Criado 2023). Rather than looking at collaborative epistemic environments (cf. Felt 2022) to consider issues of expertise (cf. Sarkar & Drosos 2025), this ethnographer uses “vibe coding” as ethnographic practice, not merely as a device to open up the ethnographic. Similarly, genAI is imagined not as a collaborator but as an absurd ensemble of digital, technological, and textual “things” that together move toward a kind of “gamification” of ethnographic practice, an exercise in “critical design” (Dunn 1997) that asks not what genAI can be useful for, but rather how tinkering or designing with genAI can reframe our not-knowing (Wakkary et al. 2015; Wakkary 2021). Ethnographic vibe coding is thus partly autoethnographic, and yet it relies on text re-assembled through re-calculated weights in my fine-tuned model. The question shifts, then, from “does it work?” (de Laet & Mol 2000) to “what does it absorb?”
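In the spirit of that deliberate uselessness, the following is a hypothetical sketch of what a fieldnote-generating chatbot loop might look like; the canned templates stand in for the author's fine-tuned model, and nothing here is StoryGen's actual code.

```python
# Hypothetical sketch: a deliberately useless fieldnote-generating chatbot.
# The template lists below stand in for a fine-tuned generative model.
import random

OPENINGS = [
    "Day 47 in the field.",
    "The field site has moved again.",
    "My interlocutors refuse to be located.",
]
OBSERVATIONS = [
    "the vending machine performs kinship",
    "a GPU hums something like consent",
    "the server farm dreams in multivariate vectors",
]

def generate_fieldnote(prompt: str) -> str:
    """Return an absurd 'fieldnote'; a fine-tuned model would sit here."""
    return (f"{random.choice(OPENINGS)} Regarding '{prompt}': "
            f"today {random.choice(OBSERVATIONS)}. Further not-knowing required.")

# Minimal REPL-style chatbot loop: exits on an empty line
if __name__ == "__main__":
    while (prompt := input("field query (blank to quit)> ")):
        print(generate_fieldnote(prompt))
```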
Short abstract
In this presentation, I discuss how my research on synthetic voices has evolved from research-creation projects that directly activate generative AI models to collaborative speculative workshops with creative professionals, in a shift towards reimagining vocal futures with or without AI.
Long abstract
Since 2023, my research on synthetic voices has engaged in research-creation methods that critically approach, activate, and play with generative AI technologies. From 2023 to 2024, I created five research-creation projects that use AI systems to generate voice, sound, text, and images, and that pivot between sense-making and nonsense-making through the prompting of absurdity. As of 2024, I no longer directly activate generative AI tools, for ethical reasons. Instead, my research-creation work has shifted towards considering speculative AI models, including a series of workshops. Running from January to June 2026, these workshops are a continuation of my 2025 research-creation project hmm-aa-t, which explores the vocality of non-words and non-voice (Dolar 2006) through recordings of culturally specific non-verbal communication and resonance. Inspired by Anita Say Chan's Predatory Data (2025), these workshops imagine alternative, improbable vocal futures, in the face of AI's generation of probability, through group interviews and vocal sound creation. They bring together 25 people who have a creative, research, or professional practice that negotiates voice (e.g. singers/composers, voice actors, sound artists, and others). Voicing, listening to, and layering these collective recordings of non-verbal communication act as a catalyst for speculating about the human and non-human voices and vocal bodies they create. What can we understand about synthetic voice technologies by questioning, recreating, and intervening in them ourselves, outside of these systems? In this presentation, I trace the evolution of these research-creation methods in my work to consider what collaborative speculation offers in reimagining these technologies and our vocal futures, with or without AI.