- Convenor: Denisa Kera (Bar Ilan University)
- Format: Workshop
Short Abstract
An interactive workshop exploring AI agents as performative mediators that give data, models, and infrastructures a voice in STS inquiries through participatory simulations and speculative design. Using satellite data from Krakow, we will let neighborhoods and places debate the environmental issues that affect them.
Description
This workshop (https://nonhumansai.lovable.app/) invites participants to explore how AI agents can act as performative mediators between human and nonhuman worlds, giving voice to data, infrastructures, and environments. Using the open-source Satellite Personas platform developed by our Design & Policy Lab (https://github.com/anonette/satelite_personas), we will transform Sentinel-2 satellite data from Krakow into AI agents representing different buildings and neighborhoods. These agents will stage a public theatrical conflict, each attempting to persuade the audience on environmental issues affecting their area. The public will then vote, turning deliberation into a participatory experiment in situated agency.
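For readers curious about the mechanics, the sketch below illustrates, in Python, one way a satellite-derived indicator could be turned into a first-person persona prompt for an LLM agent. It is a minimal illustration only: the class, thresholds, and phrasing are hypothetical and are not taken from the Satellite Personas codebase.

```python
# Hypothetical sketch: derive a simple environmental indicator (NDVI) from
# mean Sentinel-2 band reflectances and voice it as a persona prompt.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class NeighborhoodScene:
    name: str    # e.g. a Krakow neighborhood
    red: float   # mean Sentinel-2 band 4 (red) reflectance, 0-1
    nir: float   # mean Sentinel-2 band 8 (near-infrared) reflectance, 0-1

    @property
    def ndvi(self) -> float:
        """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
        return (self.nir - self.red) / (self.nir + self.red)


def persona_prompt(scene: NeighborhoodScene) -> str:
    """Translate the indicator into a first-person voice for the agent."""
    if scene.ndvi > 0.4:
        mood = "green and thriving, and I want to stay that way"
    elif scene.ndvi > 0.2:
        mood = "losing vegetation and worried about heat and air quality"
    else:
        mood = "almost bare; concrete and traffic dominate my streets"
    return (
        f"You are {scene.name}, a neighborhood of Krakow. "
        f"Your NDVI is {scene.ndvi:.2f}: you are {mood}. "
        "Argue in the first person for the environmental measures you need."
    )


if __name__ == "__main__":
    # Dummy reflectance values standing in for real Sentinel-2 statistics.
    print(persona_prompt(NeighborhoodScene("Nowa Huta", red=0.08, nir=0.32)))
```

In the workshop itself, prompts like this would be handed to competing agents, whose differing indicators give each place a distinct rhetorical position to defend.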
To amplify this performative dimension, we will use puppets and generated animation, and we will also test different linguistic typologies (ergative, nominative, and polysynthetic structures), exploring how grammatical alignment shapes notions of agency, relation, and responsibility. Participants will collectively design and release these AI agents, observing their interactions and rhetorical tactics.
Through this process, we will reflect on how linguistic typologies and data infrastructures co-determine who or what can appear as a speaking subject within socio-technical systems. The workshop draws on material semiotics, computational ethnography, and performative research, extending STS debates on representation and accountability into algorithmic and environmental domains.
While STS has long moved beyond human and posthuman discourses to trace heterogeneous networks of agency, our Satellite Personas turn this analytic gesture into a performative one, enabling nonhumans to articulate their own positions through data, grammar, and place. An early demo from Prague is available at https://drive.google.com/file/d/1T7ZTEQthoUtkUDIjaSmp6txqiOK82qMf/view?usp=sharing
Practical requirements:
One 90-minute session, flexible seating, projector, Wi-Fi access, and participants’ laptops. Max. 25 participants.
Accepted contribution
Session 1
Short abstract
This paper uses Star and Gerson's sociology of anomaly management to explain how users of the "rationalist" forum LessWrong rallied to investigate "glitch tokens", disruptive AI outputs that defied explanation, through a combination of engineering, "media archaeology", and speculative myth-making.
Long abstract
When specific string tokens (such as "SolidGoldMagikarp" or " petertodd") were shown to trigger bizarre, evasive, or ominous behaviors in GPT-3, users of the "rationalist" web forum LessWrong faced an undocumented practical problem: classifying and investigating outputs that defied the model's advertised capabilities. This paper examines the community's situated reasoning practices, drawing on Star and Gerson's (1987) framework of anomalies as negotiated, highly situated interruptions to routine work—classified through debate as mistakes, artifacts, discoveries, or improprieties depending on organizational context and power relations.
A situational analysis of the "Glitch Tokens" thread (2023–2025) surfaces the community's practical epistemic labor, the ad-hoc methods and forms of proficiency deployed (Reddit "archaeology", repeated queries, etc.), and the competing interpretations stemming from the community's distinct "social worlds", ranging from "mechanistic interpretability" practitioners suspecting an embedding fluke to self-described "cyborgists" open to dialogue with the transcendent entities they perceived within the weights: the benevolent "Leilan" or the disruptive "petertodd". Crucially, when OpenAI silently patched these behaviors, members described this as a "lobotomy," observing that the model now "writes scared." This anthropomorphic framing accomplishes important boundary work: by conflating the model's silenced interiority with their own stifled epistemic access, LessWrong members reclaim interpretive authority over a product whose access is contingent on the company's goodwill.
By attending to these heterogeneous methods of anomaly classification, whether grounded in credentialed expertise or speculative encounter, this paper underlines the crucial role of folk theories and exploratory practices in the contested arenas of AI adoption.