Accepted Paper

Formative evaluation and the calibration of trust in critical care AI: A sociotechnical analysis  
Stephanie Burns (University of Edinburgh), Catherine Montgomery (University of Edinburgh)

Paper short abstract

Presenting work from the field of critical care, we argue that the calibration of trust should not be understood as a post-hoc evaluation of technology adoption but as an iterative sociotechnical process unfolding across the design, development, and anticipated use of AI systems.

Paper long abstract

Trust is consistently highlighted as a major component of successful technology implementation, adoption, and sustained use (Ontika et al., 2022), particularly in relation to AI decision support tools (AI DST). Much existing literature treats trust as an individual cognitive disposition to be measured after implementation. From an STS perspective, however, trust can instead be understood as a situated sociotechnical accomplishment, emerging through the alignment of practices, infrastructures, and institutional expectations. Departing from studies that examine trust post-implementation, this paper asks how trust in AI DST is configured during the development of such systems rather than retrospectively assessed once they are deployed.

Our research forms the sociotechnical analysis work package of a clinician-led research programme, ICU-Heart: using data-driven approaches and routine data to detect myocardial infarction in critical care. Drawing on STS work on co-production, sociotechnical imaginaries, and infrastructures, we present empirical findings from documentary analysis, interviews, and focus groups with clinicians, data scientists, and data architects concerning user expectations and the calibration of trust in AI DST within critical care. Our data illuminate how participants articulate expectations about reliability, accountability, and clinical judgement when imagining the future use of AI-supported diagnosis in the intensive care unit. We argue that the calibration of trust should not be understood as a post-hoc evaluation of technology adoption but as an iterative sociotechnical process unfolding across the design, development, and anticipated use of AI systems.

Traditional Open Panel P284
Understanding the impact of decision-support AI technologies on medical practice: Learning from empirical studies