- Convenors:
- Olivier Ocquidant (Télécom Paris)
Sylvie Grosjean (University of Ottawa)
Rob Procter (Warwick University and Alan Turing Institute for Data Science and AI)
Gérald Gaglio (Université Côte d'Azur)
Christian Licoppe (Télécom Paris and i3 (CNRS UMR 9217))
- Format:
- Traditional Open Panel
Short Abstract
The panel focuses on how medical practice is reframed through the use of decision-support AI technologies. From an STS perspective, it highlights the need for detailed, practice-based understandings of these technologies and for empirical studies of medical activity.
Description
Beyond the often alarmist narratives surrounding the impact of decision-support AI technologies on medical work, there remains a lack of precise knowledge about what concrete changes these tools bring to medical or clinical decision-making, and what adjustments and reconfigurations they entail in physicians’ work.
Existing studies have identified ethical and trust questions, epistemic tensions with trainers, and concerns about organizational reconfiguration in healthcare. Yet the specific effects of decision-support AI tools on the real work of physicians (radiologists, anatomic pathologists, surgeons, dermatologists, psychiatrists, etc.) remain insufficiently explored. Pioneering research in radiology has shown, for instance, that computer-aided diagnostic tools often add to radiologists’ workload. Other studies suggest that AI may increase practitioners’ reflexivity about their decisions, while still others highlight how these tools reshape sensemaking practices in medical work. Despite their different angles, these works converge on a shared concern: the adjustments and arrangements practitioners make when using these new tools.
This panel invites contributions covering diverse examples of medical decision-making practices that illuminate how physicians’ activities are rearranged and reframed when assisted by AI decision-support tools:
• What kinds of changes occur in their work, procedurally or experientially?
• Which kinds of tasks, individual or organizational, are specifically reframed or induced by these new tools, and how?
• What modes of use are observed, and how do these vary across professional contexts?
• How is physicians’ trust in the performance of AI decision-support tools calibrated and sustained in use?
We welcome empirical studies of AI in medical decision-making, including work from STS and the social sciences of medicine that applies ethnographic and micro-analytic approaches (e.g. ethnomethodology, video analysis, human factors, distributed cognition). Contributions from human–machine interaction perspectives are also encouraged.
Accepted papers
Paper short abstract
This research explores how AI-driven vocal biomarker analysis reframes medical practice in primary care teleconsultations. It investigates the "coupling" between physicians and AI to understand how these tools are integrated into diagnostic workflows.
Paper long abstract
This research explores how AI-driven vocal biomarker analysis reframes medical practice in primary care teleconsultations. It investigates the "coupling" between physicians and AI to understand how these tools are integrated into diagnostic workflows. Integrating AI vocal biomarkers offers a promising solution for early Parkinson’s disease screening in teleconsultations, particularly where physical examinations are limited. Using simulated teleconsultations where family physicians interact with an AI system, this study analyzes video recordings of teleconsultations and post-simulation interviews.
Preliminary results highlight a form of distributed clinical reasoning, where AI-generated vocal biomarkers trigger a reconfiguration of diagnostic attention. The study identifies four core dynamics within this human–AI coupling:
- Cross-validation: The AI reinforces the physician’s pre-existing clinical intuitions.
- Clinical arbitrage: When AI results conflict with medical judgment, the clinician performs mediation work to navigate the tension and avoid unnecessary testing.
- Attentional reframing: The tool facilitates the reinterpretation of specific clinical signs and shifts clinical focus toward new dimensions.
- Diagnostic co-evolution: Clinical reasoning matures through the interaction, as the system occasionally uncovers clinical elements that the patient had not previously mentioned.
Keywords: Medical AI, Clinical Reasoning, Teleconsultation, Human-AI Coupling, Parkinson’s Disease
Paper short abstract
We report some results from a longitudinal, qualitative study of the adoption of AI-based diagnostic support tools in histopathology. We focus, in particular, on issues these tools raise in regard to their accountability, transparency and trustworthiness in everyday diagnostic work.
Paper long abstract
In this paper we present the findings of a two-year qualitative study of the work of histopathologists and their experiences of trialling an AI-based clinical decision-support tool in their everyday work of detecting and diagnosing prostate cancer; the tool works by drawing the histopathologist’s attention to suspicious lesions in digitised tissue biopsies. We document changing work practices; experiences with different AI-based diagnostic tools; the strengths, weaknesses and anomalies observed in terms of productivity, measurement and reassurance; and the extent to which the AI systems meet histopathologists’ expectations. We use the findings of this study to further examine the recursive relationship between human action and the wider organisational and system context. We are especially interested in key issues regarding the impact of AI tools on the nature of diagnostic work, and how these foreground emerging questions of accountability, transparency and trust – interpersonal, organisational and trust in technology – that appear crucial to the successful adoption of this type of technological innovation within clinical settings.
Paper short abstract
We show how radiology educators and students practically make AI-based tools teachable, revealing how software interface structures learning and how radiomics pedagogy emerges through users’ situated, embodied work with the platform.
Paper long abstract
This paper examines how medical professionals, educators, and students practically engage with AI‑based tools in radiology, focusing on the sequential and embodied work through which such technologies are made accountable, intelligible, and teachable [1]. Drawing on EMCA‑informed video‑ethnographic studies of teaching sessions using the radiomics platform QuantImage [2], we show how participants orient to the software not only as a computational instrument but as an interactional partner whose interface and constraints must be navigated and incorporated into pedagogical work.
Radiomics – where quantitative features are extracted from imaging data and processed through machine‑learning models – poses persistent challenges of interpretability and trust [3]. While technical advances are substantial, little is known about how clinicians and trainees actually work with radiomic models in situ or learn to integrate them into medical workflows. Our study investigates how educators and students co‑construct learning environments around QuantImage, and how its design embodies a formalized version of the radiomics “workflow” that participants must learn to inhabit.
A central focus is the coherence between the “introductory” and “practical” parts of radiomics teaching. We show how instructors prospectively frame what will matter in the hands‑on session, and how participants retrospectively mobilize earlier explanations while working through the software’s stepwise procedures. These practices accomplish the recognizability of the activity as teaching and learning radiomics. We argue that learning AI‑based tools involves aligning with the praxeological analysis embedded in the software itself, respecifying AI as an ongoing accomplishment of human‑with‑machine practices.
[1] https://doi.org/10.3389/fcomm.2023.1234987
[2] https://doi.org/10.1186/s41747-023-00326-z
[3] https://doi.org/10.1002/med.21846
Paper short abstract
Presenting results from a year and a half of video-ethnographic study, this paper addresses some effects of using AI-based decision-support devices in radiology (mammography), showing that tensions arise in practice around problems of epistemic distribution related to AI.
Paper long abstract
Our video-based ethnography of radiologists’ use of AI decision-support devices in breast cancer detection sheds light on several integration issues arising in practice. We highlight three distinct types of use that illustrate different positions and tensions regarding these devices. These forms of use correspond to different epistemic distributions:
1. AI is “separated”, queued, and epistemically subordinated to the radiologist’s authority. Radiologists maintain the lead throughout the entire reading process, admitting AI advice only within a confirmation/invalidation framework (thus sidestepping most of the potential ambiguity it may introduce).
2. AI is “intertwined” with radiologists’ reading through the enactment of two independent channels of interpretation. Radiologists temporarily eclipse their own authority and assess, at the end, the outcome of the confrontation between both readings.
3. AI is “merged” and closely incorporated into radiologists’ reading from the beginning of the process, the boundaries between the two authorities and reading modalities becoming blurred.
From these empirical observations, several effects of AI on radiological reading can be inferred. The practice is reframed and put under tension around what counts as its good and legitimate forms, thereby relocating the stakes and the object of radiologists’ authority:
• the assumption of the radiologist’s legitimate authority and ability to decide while avoiding doubt (1);
• the open interpretive deliberation as the core of the clinical process, which should unfold fully (2);
• the evaluation of AI advice as the central focus of the reading (3).
Paper short abstract
Integration or conflict: interaction between AI and traditional Chinese medicine
Paper long abstract
Traditional Chinese Medicine (TCM) primarily relies on the four diagnostic methods—inspection, listening/smelling, inquiry, and palpation—with inspection and palpation showing the most significant divergence from modern scientific diagnostics. As AI technology advances, digital tongue and pulse diagnostic instruments have been introduced into clinical settings, particularly within the training programs of teaching hospitals in Taiwan. Despite policy incentives, a gap remains between technological integration and practical application. Many clinicians still prioritize "tactile intuition" and direct observation, while patients often favor traditional methods due to cultural expectations of TCM.
This study explores the dilemmas practitioners face in this modernization process. Key obstacles include the time-consuming nature of instrument operation compared to manual diagnosis and the public's perception of "hand-on-wrist" palpation as a symbol of trust and authenticity. By analyzing clinical experiences, this paper examines how practitioners navigate the trade-off between traditional expertise and scientific progress. It further discusses how TCM, as a discipline undergoing modernization, can reconcile AI advancements with clinical efficiency and patient trust, ultimately defining the role of technology in the future of holistic healing.
Keywords: Traditional Chinese Medicine, AI, Tongue/Pulse Diagnosis Device
Paper short abstract
Presenting work from the field of critical care, we argue that the calibration of trust should not be understood as a post-hoc evaluation of technology adoption but as an iterative sociotechnical process unfolding across the design, development, and anticipated use of AI systems.
Paper long abstract
Trust is consistently highlighted as a major component of successful technology implementation, adoption, and sustained use (Ontika et al., 2022), particularly in relation to AI decision support tools (AI DST). Much existing literature treats trust as an individual cognitive disposition to be measured after implementation. From an STS perspective, however, trust can be understood instead as a situated sociotechnical accomplishment, emerging through the alignment of practices, infrastructures, and institutional expectations. Departing from studies which look at trust post-implementation, in this paper we examine how trust in AI DST is configured during the development of such systems rather than retrospectively assessed once they are deployed.
Our research forms the sociotechnical analysis work-package of a clinician-led research programme, ICU-Heart: using data-driven approaches and routine data to detect Myocardial Infarction in critical care. Drawing on STS work on co-production, sociotechnical imaginaries, and infrastructures, we present empirical findings from documentary analysis, interviews, and focus groups with clinicians, data scientists, and data architects about user expectations and the calibration of trust in AI DST within critical care. Our data illuminate how participants articulate expectations about reliability, accountability, and clinical judgement when imagining the future use of AI-supported diagnosis in the intensive care unit. We argue that the calibration of trust should not be understood as a post-hoc evaluation of technology adoption but as an iterative sociotechnical process unfolding across the design, development, and anticipated use of AI systems.
Paper short abstract
This case study of a discontinued AI-based decision aid investigates how actors understood its purpose, role in consultations and development. It approaches failure as an analytic lens to examine how sociotechnical negotiations around expertise, organization and clinical work shape medical practice.
Paper long abstract
AI-based decision aids are frequently framed by developers, tech companies, and healthcare organizations as improving shared decision-making, reducing overtreatment, and enhancing patient autonomy. Yet they often struggle to become meaningfully embedded in clinical practice. We conducted a case study of an AI-based decision aid developed in the Netherlands to support patients with prostate cancer by providing personalized information on treatment options and predicted side effects; despite high expectations, it was ultimately discontinued.
Inspired by Latour’s Aramis, or the Love of Technology, we approach failure as an analytic lens for examining how entangled social, organizational, and technological factors sustain or unravel decision-support AI technologies. After completion of the project, we conducted semi-structured interviews (n=17) with clinicians, AI developers, health scientists, decision-aid company employees, and a patient representative involved. Through reflexive thematic analysis, we trace how actors understood the purpose of the decision aid, its place within clinical consultations, and their roles in its development and use.
Findings show that procedurally, clinicians were asked to integrate additional informational outputs into consultations, which prompted reflections about treatment decisions and care paths. Experientially, the introduction of the decision aid surfaced tensions around expertise, expectations, and boundaries of medical judgment. Organizationally, multi-stakeholder arrangements, regulatory changes, and dependencies on an external company introduced new coordination demands and economic considerations that shaped how the decision aid could be employed.
This study shifts attention to the sociotechnical relations, negotiations, and frictions through which decision-support AI technologies reframe medical practice – dynamics that often remain invisible in successful deployments.
Paper short abstract
As AI enters medical contexts, media discussions are polarized between risks and opportunities. However, real-life implementations of AI shape which potentials are realized. This study focuses on AI bias, examining how it is encountered, discussed, and addressed in the Swedish healthcare context.
Paper long abstract
Public discussions frequently frame medical AI applications as outperforming human expertise (Bunz & Braghieri, 2022). At the same time, critical scholars question the narrative of ‘AI for social good’ (Radhakrishnan, 2021) and warn about AI exacerbating existing problems of unjust healthcare access and outcomes. Both dystopian and utopian academic and media narratives stand in contrast to ‘epistemic modesty’ displayed by those who encounter the technology’s potential and limitations in their own practice (Samuel et al., 2021).
In this paper, I focus on AI bias in the medical context as a polarizing topic: some argue AI could revolutionize medicine and address existing inequalities, while others warn it could disastrously exacerbate existing biases. In the Nordic countries, proponents of rapid AI adoption suggest that Nordic welfare states with advanced digitalization and a strong focus on equality could take a leading role in developing AI systems that balance economic incentives and social justice.
Focusing on Swedish AI projects that have been implemented in medical practice, I examine how critical discussions of AI risk and AI bias are taken up, and whether theoretical discussions about marginalized bodies and promises of equality as a Nordic strength travel into practice. Interviewing medical and technical staff involved with implementing AI projects, I investigate to what extent and what types of biases are encountered, discussed, and addressed in their work, and how practitioners make sense of marginalized bodies and discrimination in medicine in relation to the implementation of AI tools.