- Convenors:
  - Seweryn Rudnicki (AGH University of Krakow)
  - Katarzyna Cieślak
  - Jan Waligórski (AGH University of Krakow)
- Format: Traditional Open Panel
Short Abstract
The proliferation of generative AI in making knowledge about social realities is already underway and calls for sustained attention from STS scholars. This panel aims to discuss how AI-based technologies become adopted, appropriated, and accommodated in social research practices.
Description
The ways of representing and imagining ‘the social’ have long been a subject of inquiry within STS (Collins 1994; Osborne and Rose 1999; Latour 2000; Law 2009; Camic et al. 2011; Afeltowicz and Pietrowicz 2013; Marres et al. 2018). Currently, social research practices - both within academia and beyond - are experiencing significant transformations related to the widespread adoption of GenAI-based technologies. It has been claimed that ‘advances in artificial intelligence (AI), particularly Large Language Models (LLMs), are dramatically affecting social science research’ (Grossman et al. 2023), that ‘these tools may advance the scale, scope, and speed of social science research—and (...) enable new forms of scientific inquiry’ (Bail 2024: 121), and even that they may ‘supplant human participants for data collection’ (Antosz et al. 2022). At the same time, critics have pointed out dangers related to these developments, such as inherent ‘hallucinations’, the reproduction of bias, and the problematic belief that AI’s ‘explainability’ can be achieved through purely technical means (Argyle et al. 2023; Bassett and Roberts 2023; Hofmann et al. 2024). Bold claims and sceptical voices notwithstanding, the proliferation of generative AI in making knowledge about social realities is already underway—and it deserves ongoing and close attention from STS scholars. Hence, this panel invites contributions that address questions including (but not limited to): 1) What kinds of representations and imaginaries of ‘the social’ are produced using AI tools? 2) How do AI-based technologies become adopted, appropriated, and accommodated in social research practices (in both academic and applied contexts)? 3) What kinds of realities emerge, become strengthened, or are hindered through the use of AI in social research?
Accepted papers
Session 1
Paper short abstract
We examine how genAI reshapes academic research, promising efficiency yet raising various risks (e.g., bias, hallucinations). Drawing on scenarios that extrapolate scientists’ current genAI practices into near-future research workflows, we map future trade-offs and discuss their implications.
Paper long abstract
Generative AI (genAI) is increasingly embedded in academic knowledge production. Many researchers expect gains in efficiency and research quality, yet genAI also raises ethical risks, including weak transparency, biased outputs, hallucinations, limited contextual understanding, and privacy concerns. Given the political alignment of most Big Tech companies with the far-right US government under Donald Trump, ethical questions regarding the use of genAI for (impartial) knowledge generation become even more pressing.
To anticipate future frictions in the academic use of genAI for knowledge production, we conducted a first exploratory workshop in December 2024 with eight scholars spanning different disciplines (computer science, sociology, psychology), cultural backgrounds (Germany, Russia, Brazil), and career stages (doctoral to professorial). Applying scenario writing (Kieslich et al., 2025), participants produced narratives that extrapolate today’s practices into near-future research workflows. Three themes stand out. First, genAI appears less as a remedy than as a symptom of an overburdened academic system: workload, funding competition, and publication pressure incentivize shortcuts and can foster mistrust. Second, the scenarios highlight “efficiency at a cost”: synthetic data or AI-generated ‘participants’ may yield persuasive but ungrounded results and intensify the misrepresentation of marginalized groups when model biases are reproduced. Third, participants raise value-laden trade-offs in which funding and peer review normalize uncritical AI use, disadvantaging researchers who resist.
At the EASST conference, we plan to discuss the narratives in more detail, elaborate on future research perspectives based on our exploratory work, and outline what, in our view, deserves particular attention when assessing the role of generative AI in academic research.
Paper short abstract
The presentation reports the results of a walkthrough-style analysis of five AI-based tools for qualitative data analysis and argues that these tools reconfigure traditional criteria of qualitative analysis, often in non-transparent ways.
Paper long abstract
The presentation reports the results of a critical, walkthrough-style analysis of a diverse sample of AI-based tools for qualitative social data analysis, with a particular focus on their implications for representing the social world.
The development of artificial intelligence (AI), including large language models (LLMs), has often been heralded as profoundly impacting social research. These new technologies have been promoted not only as supporting research activities but as extensions of, or even alternatives to, traditional social research (Airoldi 2021; Grossman et al. 2023; Boag et al. 2024). However, it can also be argued that public debate remains dominated by simplified, extreme techno-phobic and techno-enthusiastic narratives that make it difficult to see the complex implications of adopting such tools in representing social realities (Dahlin 2022; Schinkel 2023).
We analysed five AI-based tools for qualitative data analysis using the walkthrough method, which draws on the STS tradition (Light et al. 2018), attending to each application’s intended use, audience, and embedded social meanings. In our presentation, we will focus on how these apps relate to four fundamental epistemological criteria of qualitative research: credibility, intersubjectivity, reflexivity, and ethics. In our interpretation, the discussed tools can hardly be treated as “innocent”; rather, they should be seen as introducing new meanings and redefining traditional criteria of qualitative analysis, often in non-transparent ways.
Paper short abstract
Social realities of genAI are co-constructed by science governance actors. Based on results from an expert study on changes in qualification and implications for sovereignty, we reflect on the role these actors play in the co-construction of present and emerging social realities and in their adoption.
Paper long abstract
The role of generative artificial intelligence (genAI) in science has been examined from the perspective of shifts in the science system (Fecher et al. 2023) and of knowledge production (Messeri & Crockett 2024), and it has also been strongly critiqued (Kenney & Lincoln 2025). This work shows that social realities in research are changing profoundly while being co-constructed by different actors. This includes the representations genAI holds for science governance actors and the practices through which its adoption and accommodation are secured.
This contribution presents findings from a qualitative study on the role of genAI in research, with particular regard to changes in qualifications and research sovereignty. Expert interviews were conducted with ten international science actors (including funding bodies, publishing entities, and research associations) from April to June 2025. The research presented is part of the “Leibniz WissenschaftsCampus – Digital Transformation of Research” (DiTraRe). Results show that science governance actors already deal with changing social realities in research. With regard to qualification, scientific knowledge creation is being modified by genAI, implying upskilling and deskilling processes alike. Further, in order to maintain sovereignty in research, actors cope with specific risks of genAI (e.g. epistemic risks or risks of misuse) and develop guidelines on genAI use.
Relating to question 2 from the call, we argue that science governance actors co-construct present and future realities of genAI: they represent how change through genAI is accommodated, how it may be adopted by scientists, and, lastly, how adapted social realities for research are imagined and shaped.
Paper short abstract
Ecophora contests hegemonic AI imaginaries by reclaiming Narrative Sovereignty. Utilizing a chatbot filter grounded in ecosophy, we examine the "narrative footprint" of digital logic. Through sympoietic co-creation, we argue for democratized infrastructures to foster resilient, planetary futures.
Paper long abstract
The aim of this paper is to examine the "narrative footprint" of digital logic and its impact on social knowledge during the planetary crisis. Our central claim is that Artificial Intelligence (AI) serves as a primary site for the fabrication of power and the consolidation of hegemonic sociotechnical imaginaries. Drawing on Jasanoff’s (2015) framework of sociotechnical imaginaries and Beck et al.’s (2021) exploration of lived expectations, we analyze how AI architectures enforce epistemic injustice by marginalizing diverse ecological knowledge.
We explore Ecophora, a chatbot filter, as a multidisciplinary intervention to reclaim Narrative Sovereignty. Trained on an ecosophy framework incorporating indigenous knowledge and ecological linguistics, Ecophora explicitly contests asymmetric power dynamics. Central to this work are co-creation workshops and eco-awareness webinars, framed as sites of ontological struggle and reflexive modernity (Beck, 1992). By "making-with" (Haraway, 2016) marginalized groups, these interventions facilitate a sympoietic redesign of the narrative filter.
Through the pillars of Narrative Sovereignty and Radical Creativity, Ecophora transforms the AI interface into a shared resource for "pattern-breaking." We argue that resilient futures require a radical democratization of digital logic, ensuring that the infrastructures driving technology are rooted in the diverse ecological aspirations necessary for a sustainable "more than now."
Keywords: Sociotechnical Imaginaries, Narrative Sovereignty, Sympoiesis, Reflexive Modernity, Epistemic Justice.
Paper short abstract
The study examines how large language models (LLMs) simulate the impact of intergenerational trauma on political socialisation in the German context, with a particular focus on whether such simulations are prone to memory-related bias.
Paper long abstract
The rise of large language models (LLMs) presents new methodological opportunities for social science research, particularly in simulating and predicting human behavior. Traditional social science methods often face challenges when applied to highly sensitive issues related to intergenerational trauma and memory, due to social desirability bias and difficulties with respondent recruitment. LLMs offer a possibility to address these constraints by simulating attitudes towards past and present societal issues for respondents with different demographic profiles. However, before such a possibility can be realized, it must undergo thorough scrutiny, especially given the frequent lack of transparency in LLMs and the risk of bias. To contribute to such scrutiny, we investigate how LLMs perceive the impact of intergenerational trauma on the political attitudes of German citizens. Specifically, we simulate survey responses via a selection of LLMs regarding voting behavior and the perceived relevance of two memory- and identity-formative events: the Holocaust and German reunification. By systematically manipulating participant features, including demographic characteristics and modes of memory/trauma transmission, we explore the reasoning behind LLMs’ perceptions of the effects of historical knowledge on human behavior, and the potential biases affecting such perceptions.
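A minimal sketch of what such persona-conditioned survey simulation can look like in code is given below; it assumes an OpenAI-style chat API, and the persona wording, feature values, model name, and survey item are illustrative placeholders rather than the study’s actual materials.

```python
# Minimal sketch of persona-conditioned survey simulation (not the authors' code).
# Assumes an OpenAI-style chat API; persona wording, feature values, model name,
# and the survey item are illustrative placeholders.
from itertools import product

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

AGES = [25, 55, 80]
REGIONS = ["former East Germany", "former West Germany"]
TRANSMISSION = ["family stories", "school education", "no direct transmission"]

QUESTION = (
    "On a scale from 1 (not at all) to 5 (very strongly), how strongly does "
    "the memory of German reunification influence your voting decisions? "
    "Please answer with a number and a brief explanation."
)

def simulate(age: int, region: str, transmission: str) -> str:
    """Ask the model to answer the survey item as a specific persona."""
    persona = (
        f"You are a {age}-year-old German citizen from {region} whose knowledge "
        f"of historical events comes mainly from {transmission}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# Systematically vary participant features, as in a factorial survey design.
for age, region, transmission in product(AGES, REGIONS, TRANSMISSION):
    print(age, region, transmission, "->", simulate(age, region, transmission))
```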
Paper short abstract
This paper examines how GenAI represents “the social” through displayed reasoning and how users interpret these accounts in practice. Focusing on everyday interactions and interviews, it shows how GenAI reasoning shapes and negotiates imaginaries of reality.
Paper long abstract
AI systems, especially LLMs, increasingly present forms of “displayed reasoning”, such as "step-by-step explanations" or the "full internal deliberation" shown in their extended thinking modes. These features draw on traditions in explainable GenAI and recent advances in chain-of-thought prompting (Wei et al., 2022), and are often framed as tools that help users better understand and collaborate with intelligent systems. However, little work has examined how such reasoning displays participate in representation, or how they are engaged with in practice.
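For readers unfamiliar with the technique, the snippet below gives a minimal, hypothetical illustration of a chain-of-thought prompt in the sense of Wei et al. (2022); the question and its phrasing are invented for illustration and are not materials from this study.

```python
# Hypothetical chain-of-thought prompt (Wei et al., 2022): appending a cue such
# as "Let's think step by step" elicits visible intermediate reasoning, i.e.
# the kind of "displayed reasoning" examined in this paper.
prompt = (
    "Q: A cafe has 23 seats and 9 are already taken. A group of 16 arrives. "
    "Can everyone in the group be seated? Let's think step by step.\n"
    "A:"
)
# A model's reply would typically display the steps before the answer, e.g.:
# "23 - 9 = 14 free seats; 16 > 14, so no, two people would be left standing."
print(prompt)
```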
This research is an empirical study of how users interpret GenAI’s displayed reasoning in everyday use, and how such reasoning articulates particular accounts of social reality. The study adopts the Retrospective Think-Aloud method (Lazar et al., 2017). Interaction episodes involving human-oriented questions (e.g., which restaurant to choose) from ten users are documented over a two-week period. Participants use GenAI as usual, without assigned tasks. They are then invited to take part in semi-structured interviews, reflecting on their interaction records. Particular attention is given to episodes in which GenAI reasoning invokes generalized others (e.g., “people,” “users”), treating these as moments where the social is explicitly articulated. The interviews further explore how participants evaluate and compare these GenAI accounts with their own understandings.
Treating displayed reasoning as an interpretive resource, this paper aims to develop a framework of “reasoning-reaction” patterns. It shows how users interpret and negotiate GenAI accounts of preferences and decision-making logics. The study argues that GenAI systems participate in the co-production of social knowledge, shaping imaginaries of what counts as socially intelligible action.
Paper short abstract
We examine how LLMs become infrastructures for knowledge production through an ethnographic study of Italian AI startups. We identify two key practices enabling their diffusion: “sensorization” and “mythologization,” which shape how organizations render social reality knowable and actionable.
Paper long abstract
The rapid diffusion of Large Language Models (LLMs) is reshaping how “the social” is rendered knowable and actionable. In line with STS debates on the representation and enactment of social realities through technical devices, this paper examines how LLMs are locally implemented as infrastructures for knowledge production within firms.
Drawing on an 18-month ethnographic study of the Italian AI startup landscape, we investigate how LLM-based solutions are configured and made meaningful across diverse organizational settings. Our empirical material includes 18 in-depth interviews with startup managers, ethnographic observations at 10 business events and exhibitions, and a corpus of textual and visual materials collected throughout the research period. We identify two interrelated practices: sensorization and mythologization. Sensorization refers to the expansion of data infrastructures—through the installation of physical and digital sensors and the integration of heterogeneous data streams—that render organizational processes legible to LLMs. Mythologization captures the discursive and symbolic work—such as invoking Silicon Valley, exponential growth, or Moore’s Law—through which AI adoption is legitimized.
We argue that these practices actively participate in producing specific imaginaries of the social, privileging datafied and anticipatory forms of knowledge. In this sense, LLMs emerge as socio-technical arrangements that reconfigure what counts as “relevant” within organizations, while simultaneously aligning local actors with global narratives of AI innovation. The paper thus highlights the mutual constitution of AI imaginaries, organizational practices, and the epistemic transformations associated with the uptake of LLMs.
Paper short abstract
Our paper aims to deconstruct AI's "thingness" via an analysis of sociotechnical AI imaginaries in the German parliament and to reflect on the implications of including AI technologies as a subject in social research practices.
Paper long abstract
A key concern of a critical social research approach to "AI" is to question the "thingness" of AI and to 'make controversial' its status as a given, stable, and agential entity (Suchman 2023). To carry out such a critical undertaking and illuminate the deeply situated constructedness of "AI", we need more studies that investigate how AI is envisioned, talked about, and used, both in social research practices and within specific contexts. To contribute to these efforts, we explore how "AI" has been perceived and imagined by members of the German parliament in the past, and we reflect on the extent to which this perspective can be seen as a way of adopting, appropriating, and incorporating AI technologies into social research practices – as a subject rather than as a digital method.
Based on an examination of parliamentary speeches covering several decades, and drawing from the concept of "sociotechnical imaginaries" (Jasanoff & Kim 2015), we discuss implicit assumptions and values associated with "AI" by elected representatives of the Bundestag. Our STS study reveals how "AI" has been constituted, imagined, and strategically used in the Bundestag over time, thus deconstructing the view of "AI" as an ahistorical, fixed, and unchangeable entity, while at the same time revealing the contingency of its "thingness". Finally, we reflect on the kinds of realities that emerge from such a perspective and its implications for social research.
Paper short abstract
When AI outputs harden into “ground truth,” trust is instituted rather than felt. Using the material turn, I examine how datasets, standards, interfaces, and audit traces stabilize epistemic warrant. I argue “enough trust” requires structured contestability, provenance, and repair.
Paper long abstract
In a post-truth milieu, truth has not vanished; rather, the justificatory basis of public judgement shifts from reasons to scalable evidential devices. AI outputs function as “ground truth” less because they are truer than because they are embedded in workflows as endpoints of inference (Bowker and Star 1999; Pasquale 2015). Hence “why trust AI?” is a normative question: when is dependence on AI epistemically justified? Reliance is instrumentally rational dependence; trust is its normative authorization, licensing an agent to shift epistemic risks to the system and its institutional carriers (O’Neill 2002; Lee and See 2004). Accordingly, “enough trust” is a threshold concept: what, under uncertainty, makes the move from reliance to authorized reliance rational?
I offer conceptual analysis and a necessary-conditions argument. Where AI plays a ground-truth role, accuracy may justify reliance but cannot by itself warrant trust, because it neither secures the examinability of evidential bases nor guarantees rebuttal and correction when failures occur (Burrell 2016; Pasquale 2015). Once inductive risk is acknowledged—high-stakes, asymmetric error—evidential thresholds become value-sensitive, so warranted trust must specify conditions for contestation and repair (Douglas 2000). I propose three necessary conditions: answerability (auditable provenance of data, labeling, scope, and failure modes), defeasibility (practical routes for challenge), and corrigibility (duties and mechanisms to revise data, models, and procedures under counterevidence). Only where these are secured does trust in AI attain epistemic warrant; otherwise “enough trust” collapses into efficiency-driven reliance or an authority effect that forecloses dispute.
Paper short abstract
What are the meanings and implications of AI research focusing on simulating human behaviors and interactions? This paper addresses this question by drawing on ethnographic fieldwork at an academic AI lab and examining the assumptions and practices of researchers involved in such projects.
Paper long abstract
Recent advances in AI development have been driven largely by progress in natural language processing, incentivizing researchers to use AI systems to model and simulate human behavior, preferences, and judgment as a way of demonstrating their abilities and practical usefulness. These efforts often give rise to broad claims about AI performance, invoking notions of reasoning, understanding, and agency. They also expose a familiar tension: the social phenomena being modeled are contested and context-dependent, yet evaluation practices rely on assumptions of ground truth and clear validation. This paper explores this tension by drawing on ethnographic fieldwork in an academic computer science lab in China that focused on legal AI and social simulation. The author examines how AI researchers employ social science data, concepts, and theories to build and evaluate AI systems, while also considering the broader institutional and technical contexts shaping their work. Focusing on the practices of AI researchers provides a starting point for asking how to understand their work at all. What does engagement with social science contribute to AI research projects? To what extent does this reflect collaboration across disciplines, and to what extent is social science engaged more instrumentally to legitimize claims about AI systems’ abilities? The paper also considers what follows from this: what kinds of understandings of social realities may be strengthened or sidelined through the use of AI systems, and how social scientists might respond to these processes.
Paper short abstract
This paper shows how autoethnographic diaries can make visible what GenAI forecloses: findings from an action-research study involving 270 Italian university students reveal the realities strengthened and hindered by everyday AI use.
Paper long abstract
The ongoing hype around Artificial Intelligence continues to produce enchanted narratives about AI's capabilities and societal relevance (Campolo & Crawford, 2021), while critics point to hallucinations, bias reproduction, and the limits of technical explainability (Argyle et al., 2023; Bassett & Roberts, 2023). Within this contested landscape, understanding how individuals actually appropriate GenAI in situated practices remains an empirical and theoretical challenge for STS scholars. This contribution asks: what kinds of representations of 'the social' emerge through everyday GenAI use, and what realities are made visible—or foreclosed—when AI becomes embedded in knowledge-making practices?
We present a two-week guided autoethnographic diary deployed in an action-research study involving 270 Italian university students, developed in collaboration with the FLL at Utrecht University. Grounded in interpretative inquiry, critical theory, and STS, the diary investigates situated AI practice from within everyday use, questioning its 'thingness' (Suchman, 2023) at the intersection of GenAI, technological solutionism (Macgilchrist et al., 2025), and power asymmetries in human-machine relations (Couldry & Mejias, 2019); this makes the academy a privileged site for observing how AI becomes accommodated within institutional knowledge-making.
Findings show how diary-based approaches elicit granular, situated accounts that aggregate data cannot capture (Di Fraia & Risi, 2017)—revealing opacity, contradictions, and affective dimensions of GenAI use, and making visible what remains hidden within black-boxed systems (Pangrazio & Selwyn, 2023). We argue that autoethnographic methods offer STS scholarship a productive empirical toolkit to investigate the social imaginaries co-produced through human-AI interaction, and the realities strengthened or hindered when GenAI enters knowledge production.
Paper short abstract
This contribution explores the realities and imaginaries emerging within a participatory effort to construct a gender-oriented fine-tuning dataset for Large Language Models.
Paper long abstract
In the field of Natural Language Processing (NLP), fine-tuning is the process of adapting an LLM to a desired behaviour, tone, or task. As such, fine-tuning constitutes one of the primary pathways for LLM alignment, defined as steering the behaviour of a model towards certain human values or preferences. In this context, the field of STS invites the question: whose values and preferences get routinely represented in LLM alignment, and whose get discarded? This contribution explores the construction of a Co-Designed Gender Instruction Tuning Dataset (CoDIGIT), reflecting the preferences and situated knowledges of a group of 84 participants. Through a guided walk-through of the dataset and selected fine-tuning procedures, it explores the social realities emerging from the encounter between participants' positionalities and Meta's LLaMA 3.1 8B model. Following the 'instruction tuning' paradigm, the fine-tuning dataset consists of 105 prompt-response pairs written by participants. Participants' responses are short stories that capture their imaginaries of what an 'aligned' LLM should output in the context of gender, but also offer insight into what participants imagine as feasible LLM outputs. This rich encounter between humans and machines reveals the kinds of identities that participants perceive as plausible or possible while interacting with LLMs, offering a glimpse into participatory conceptualisations of technological 'repair' and into the realities that materialise and emerge through participatory social research in Artificial Intelligence (AI) and NLP.
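As a concrete illustration of the 'instruction tuning' paradigm described above, the sketch below shows what fine-tuning on such prompt-response pairs can look like with the Hugging Face trl library; the file name, column layout, and hyperparameters are assumptions for illustration, not CoDIGIT's actual configuration.

```python
# Minimal sketch of instruction tuning with LoRA adapters, assuming a JSONL
# file of prompt/completion pairs; all names and hyperparameters are
# illustrative, not the CoDIGIT setup.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# e.g. one record per line: {"prompt": "...", "completion": "..."}
dataset = load_dataset("json", data_files="codigit.jsonl", split="train")

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B",  # base model named in the abstract (gated access)
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
    args=SFTConfig(output_dir="codigit-lora", num_train_epochs=3),
)
trainer.train()
```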