- Convenors: Johannes Paßmann (Ruhr University Bochum), Ronja Trischler (Technische Universität Dortmund)
- Chairs: Johannes Paßmann (Ruhr University Bochum), Ronja Trischler (Technische Universität Dortmund)
- Discussant: Cornelius Schubert (TU Dortmund)
- Format: Traditional Open Panel
Short Abstract
This panel bridges pre-digital methodologies with post-digital challenges. We ask how qualitative inquiry can handle AI as an "informant" by re-grounding inquiry in core principles (symmetry, iteration, interpretation). We invite conceptual papers on this dialogue.
Description
The rise of Generative AI presents a fundamental challenge to pre-digital methodologies. Large Language Models are rapidly shifting from mere objects of study to active participants—or "non-human informants"—within the research process. This development challenges the human-centric foundations of our established methods: How do we conduct inquiry when our counterparts lack human intentionality yet produce eloquent statements?
This panel argues against framing this as an exceptional "digital" problem. Such exceptionalism creates a false binary, obscuring the deep methodological expertise our qualitative traditions in STS (and beyond) already possess.
We propose a "post-digital methodology" that explicitly re-grounds digital inquiry in pre-digital principles. By applying methodological "symmetry," we use a consistent analytical vocabulary for all actors. This symmetry does not equate human and machine; it foregrounds the unique agency and accountability of the human researcher in orchestrating, interpreting, and taking responsibility for the contributions of these opaque "informants." The "opacity" of AI is not new; qualitative methods have always been the premier tool for interpreting the "black boxes" of unreliable informants by analyzing their external practices.
We seek contributions that theorize the future of qualitative methods by placing them in dialogue with their pre-digital foundations. We invite papers that move beyond case studies to offer conceptual syntheses. We are interested in: (1) Applying core pre-digital principles (adequacy, iteration, crystallization/triangulation) to these new assemblages; (2) Reflecting on researcher accountability and agency when working with non-human informants; (3) Tracing the genealogies of post-digital methods back to established traditions (GTM, ethnomethodology, etc.). This panel will enhance our methodological "futures literacy" for a resilient STS.
Accepted papers
Paper short abstract
The reliance on Large Language Model (LLM) outputs as evidence from 'non-human informants' challenges pre-digital adjudicatory frameworks. To bridge this, I propose the Explainable Audit Trail (XAT), a post-AI reconceptualisation of the audit trail, built on empirical and interdisciplinary research.
Paper long abstract
Generative AI systems, such as Large Language Models (LLMs), are increasingly used in the justice system in England and Wales to process forensic audio and textual data, performing tasks such as transcription, translation, summarisation, and interpretation. Their outputs may serve as ‘expert evidence’ that judges and juries must evaluate, positioning LLMs as ‘non-human informants’. In this sense, adjudication resembles qualitative inquiry, relying on interpretation, triangulation, and assessments of adequacy, albeit within the safeguards of trial fairness.
LLMs introduce familiar challenges, like inaccuracy and opacity, yet traditional mechanisms for testing reliability, such as cross-examination and summative reports, are poorly suited to them. This underscores the need to reconceptualise how AI outputs are assessed for reliability in criminal adjudication.
This research draws on audit trails, a long-standing tool for documenting how information is created and interpreted across disciplines, including digital forensics and qualitative research. With roots dating back to the fifteenth century, audit trails provide a pre-digital foundation for evaluating ‘non-human informants’.
I propose the Explainable Audit Trail (XAT): a reconceptualisation of the traditional audit trail designed to enhance the reliability assessment of AI systems. Grounded in empirical analysis of digital forensic practice and interdisciplinary scholarship across law, human–data interaction, explainable AI, and scientific communication, XAT provides process transparency across the evidential lifecycle. It documents how LLM outputs are generated and interpreted, enabling courts to assess reliability in a structured way. Through XAT, I demonstrate how pre-digital methodologies can support post-digital evaluation of LLM outputs.
Paper short abstract
How can qualitative inquiry remain reflexive when working with algorithms, archives, and AI outputs? The Extended Digital Case Method treats non-human informants as generative analytic partners while centering researcher judgment to preserve contextual integrity when interpreting digital outputs.
Paper long abstract
How can qualitative inquiry remain reflexive when informants include non-human actors such as algorithms, archives, and AI outputs? This paper develops the Extended Digital Case Method (EDCM), a conceptual framework that positions digital actors as methodologically consequential “non-human informants” rather than substitutes for immersion. Building on Michael Burawoy’s extended case method, which emphasizes theory reconstruction through empirical anomalies, the EDCM adapts reflexive ethnography to the epistemic and methodological challenges of digitally-mediated fields. Platform governance, temporal persistence, and high-volume interactions complicate access, participation, and interpretation, while computational tools risk abstracting social relations from context.
The EDCM addresses these challenges by integrating digital tools (APIs, archives, and small language models) as extensions of reflexive practice. The researcher remains the accountable agent, using these tools to surface anomalies, extend temporal observation, and interrogate deviations while preserving interpretive fidelity to situated social practices. By combining sustained participant observation with selective digital augmentation, the EDCM operationalizes pre-digital principles in post-digital settings: adequacy for scope, contextual sensitivity for depth, and iterative engagement through repeated analysis for validity. Computational outputs function as generative informants whose contributions are interpreted and reintegrated through human judgment.
Illustrated through a digital ethnography of r/Antiwork, a 2.9M-member Reddit community focused on labor critique, the EDCM reveals minority perspectives, platform-mediated governance, and interactional divergences that challenge traditional labor and social movement theories. More broadly, it demonstrates how qualitative inquiry can maintain analytical rigor, accountability, and theory-generative potential as human and computational actors jointly shape social knowledge, situating post-digital methods within a lineage of reflexive ethnography.