- Convenors:
- Seweryn Rudnicki (AGH University of Krakow)
- Katarzyna Cieślak
- Jan Waligórski (AGH University of Krakow)
- Format:
- Traditional Open Panel
Short Abstract
The proliferation of generative AI in making knowledge about social realities is already underway and calls for sustained attention from STS scholars. This panel aims to discuss how AI-based technologies become adopted, appropriated, and accommodated in social research practices.
Description
The ways of representing and imagining ‘the social’ have long been a subject of inquiry within STS (Collins 1994; Osborne, Rose 1999; Latour 2000; Law 2009; Camic et al. 2011; Afeltowicz, Pietrowicz 2013; Marres et al. 2018). Currently, social research practices - both within academia and beyond - are undergoing significant transformations related to the widespread adoption of GenAI-based technologies. It has been claimed that ‘advances in artificial intelligence (AI), particularly Large Language Models (LLMs), are dramatically affecting social science research’ (Grossman et al. 2023), that ‘these tools may advance the scale, scope, and speed of social science research—and (...) enable new forms of scientific inquiry’ (Bail 2024: 121), and even that they may ‘supplant human participants for data collection’ (Antosz et al. 2022). At the same time, critics have pointed out dangers related to these developments, such as inherent ‘hallucinations’, the reproduction of bias, and the problematic belief that AI’s ‘explainability’ can be achieved through purely technical means (Argyle et al. 2023; Bassett, Roberts 2023; Hofmann et al. 2024). Amid both bold claims and sceptical voices, the proliferation of generative AI in making knowledge about social realities is already underway, and it deserves ongoing and close attention from STS scholars. Hence, this panel invites contributions that address questions including (but not limited to): 1) What kinds of representations and imaginaries of ‘the social’ are produced using AI tools? 2) How do AI-based technologies become adopted, appropriated, and accommodated in social research practices (both in academic and applied contexts)? 3) What kinds of realities emerge, become strengthened, or are hindered through the use of AI in social research?
Accepted papers
Session 1
Paper short abstract
We examine how genAI reshapes academic research, promising efficiency yet also raising various risks (e.g., bias, hallucinations). Drawing on scenarios that extrapolate scientists’ current genAI practices into near-future research workflows, we map future trade-offs and discuss their implications.
Paper long abstract
Generative AI (genAI) is increasingly embedded in academic knowledge production. Many researchers expect gains in efficiency and research quality, yet genAI also raises ethical risks, including weak transparency, biased outputs, hallucinations, limited contextual understanding, and privacy concerns. With the political alignment of most Big Tech companies with the far-right US government under Donald Trump, ethical questions regarding the use of genAI for (impartial) knowledge generation become even more pressing.
To anticipate future frictions in the academic use of genAI for knowledge production, we conducted a first explorative workshop with eight scholars spanning different disciplines (computer science, sociology, psychology), cultural backgrounds (Germany, Russia, Brazil), and career stages (doctoral to professorial) in December 2024. Applying scenario writing (Kieslich et al., 2025), participants produced narratives that extrapolate today’s practices into near-future research workflows. Three themes stand out. First, genAI appears less as a remedy than as a symptom of an overburdened academic system: workload, funding competition, and publication pressure incentivize shortcuts and can foster mistrust. Second, scenarios highlight “efficiency at a cost”: synthetic data or AI-generated ‘participants’ may yield persuasive but ungrounded results and intensify misrepresentation of marginalized groups when model biases are reproduced. Third, participants raise value-laden trade-offs in which funding and peer review normalize uncritical AI use, disadvantaging researchers who resist.
At the EASST conference, we plan to discuss the narratives in more detail, elaborate on future research perspectives based on our exploratory work, and highlight what, in our view, deserves particular attention when assessing the role of generative AI in academic research.
Paper short abstract
The presentation shows the results of a walkthrough-style analysis of five AI-based tools for qualitative data analysis and argues that these tools reconfigure traditional criteria of qualitative analysis, often in non-transparent ways.
Paper long abstract
The presentation shows the results of a critical, walkthrough-style analysis of a diverse sample of AI-based tools for qualitative social data analysis, with a particular focus on their implications for representing the social world.
The development of artificial intelligence (AI), including large language models (LLMs), has often been heralded as profoundly impacting social research. These new technologies have been promoted not only as supports for such research activities but as extensions of, or even alternatives to, traditional social research (Airoldi 2021; Grossman et al. 2023; Boag et al. 2024). However, it can also be argued that public debate remains dominated by simplified, extreme techno-phobic and techno-enthusiastic narratives that make it difficult to see the complex implications of adopting such tools in representing social realities (Dahlin 2022; Schinkel 2023).
The presentation shows the results of our analysis of five AI-based tools for qualitative data analysis using the walkthrough method, drawing on the STS tradition (Light et al. 2018) and focusing on the applications’ intended use, audience, and embedded social meanings. In our presentation, we focus on how these apps relate to four fundamental epistemological criteria of qualitative research: credibility, intersubjectivity, reflexivity, and ethics. In our interpretation, the discussed tools can hardly be treated as “innocent”. Rather, they should be seen as introducing new meanings and redefining traditional criteria of qualitative analysis, often in non-transparent ways.
Paper short abstract
Social realities of genAI are co-constructed by science governance actors. Based on results from an expert study on changes in qualification and implications for sovereignty, we reflect on the role these actors play in the co-construction of present and emerging social realities and in their adoption.
Paper long abstract
The role of generative artificial intelligence (genAI) in science has been examined from the perspective of shifts in the science system (Fecher et al. 2023) and of knowledge production (Messeri & Crockett 2024), and has also been strongly critiqued (Kenney & Lincoln 2025). This work shows that social realities in research are changing profoundly while being co-constructed by different actors. This includes the representations that science governance actors develop in practices aimed at the adoption and accommodation of genAI.
This contribution presents findings from a qualitative study focusing on the role of genAI in research in general, with regard to changes in qualifications and research sovereignty. Expert interviews were conducted with ten international science governance actors (including funding bodies, publishing entities, and research associations) from April to June 2025. The research presented is part of the “Leibniz WissenschaftsCampus – Digital Transformation of Research“ (DiTraRe). Results show that science governance actors are already dealing with changing social realities in research. With regard to qualification, scientific knowledge creation is being modified by genAI, implying up- and deskilling processes alike. Further, in order to maintain sovereignty in research, actors cope with specific risks of genAI, e.g. epistemic risks or risks of misuse, and develop guidelines on genAI use.
Relating to question 2 from the call, it is argued that science governance actors co-construct present and future realities of genAI as they represent how change through genAI is accommodated, how it may be adopted by scientists, and, lastly, how adapted social realities for research are imagined and shaped.
Paper short abstract
Ecophora contests hegemonic AI imaginaries by reclaiming Narrative Sovereignty. Utilizing a chatbot filter grounded in ecosophy, we examine the “narrative footprint” of digital logic. Through sympoietic co-creation, we argue for democratized infrastructures to foster resilient, planetary futures.
Paper long abstract
The aim of this paper is to examine the "narrative footprint" of digital logic and its impact on social knowledge during the planetary crisis. The main point is that Artificial Intelligence (AI) serves as a primary site for the fabrication of power and the consolidation of hegemonic sociotechnical imaginaries. Drawing on Jasanoff’s (2015) framework of sociotechnical imaginaries and Beck et al.’s (2021) exploration of lived expectations, we analyze how AI architectures enforce epistemic injustice by marginalizing diverse ecological knowledge.
We explore Ecophora, a chatbot filter, as a multidisciplinary intervention to reclaim Narrative Sovereignty. Trained on an ecosophy framework incorporating indigenous knowledge and ecological linguistics, Ecophora explicitly contests asymmetric power dynamics. Central to this work are co-creation workshops and eco-awareness webinars, framed as sites of ontological struggle and reflexive modernity (Beck, 1992). By "making-with" (Haraway, 2016) marginalized groups, these interventions facilitate a sympoietic redesign of the narrative filter.
Through the pillars of Narrative Sovereignty and Radical Creativity, Ecophora transforms the AI interface into a shared resource for "pattern-breaking." We argue that resilient futures require a radical democratization of digital logic, ensuring that the infrastructures driving technology are rooted in the diverse ecological aspirations necessary for a sustainable "more than now."
Keywords: Sociotechnical Imaginaries, Narrative Sovereignty, Sympoiesis, Reflexive Modernity, Epistemic Justice.
Paper short abstract
The study examines how large language models (LLMs) simulate the impact of intergenerational trauma on political socialisation in the German context, with a particular focus on whether such simulations are prone to memory-related bias.
Paper long abstract
The rise of large language models (LLMs) presents new methodological opportunities for social science research, particularly in simulating and predicting human behavior. Traditional social science methods often face challenges when applied to studying highly sensitive issues related to intergenerational trauma and memory, due to social desirability bias and difficulties with respondent recruitment. LLMs offer a possibility to address these constraints by simulating attitudes towards the past and present societal issues for respondents with different demographic profiles. However, before such a possibility can be realised in practice, it must undergo thorough scrutiny, especially given the frequent lack of transparency in LLMs and the risk of bias. To contribute to such scrutiny, we investigate how LLMs perceive the impact of intergenerational trauma on the political attitudes of German citizens. Specifically, we simulate survey responses via a selection of LLMs regarding voting behavior and the perceived relevance of two memory- and identity-formative events: the Holocaust and German reunification. By systematically manipulating participant features, including demographic characteristics and modes of memory/trauma transmission, we explore the reasoning behind LLMs’ perceptions of the effects of historical knowledge on human behavior and the potential biases affecting such perceptions.