
Accepted Paper:

Difference and control. Governance blind spot in the development of an AI-based medical imaging system  
Joaquin Yrivarren (Universitat Autònoma de Barcelona), Miquel Domènech (Universitat Autònoma de Barcelona)


Short abstract:

By describing how the developers of an AI imaging system evaluate 'rare cases', we point to a 'blind spot' shared by two governance modalities for software as a medical device. In that blind spot, uncontrolled moral deliberations are deployed that seek to control the variability of laboratory processes.

Long abstract:

We propose a description of the day-to-day work of the developers of a proprietary, AI-based imaging system applied to microbiology laboratory processes. We will focus on the practices of evaluating the different: practices in which developers must decide whether or not to accept clients' requests to apply the imaging system to 'rare' cases, that is, cases arising from particular combinations of the culture plates used in the laboratory and the microorganisms to be detected. Identifying what is different triggers hesitation and discussion concerning one facet of AI governance: the management of development in an industrial context. The empirical relevance of the practices we will describe, drawing upon a seven-month ethnography in the R&D area of a company dedicated to creating robotics and AI solutions for bacteriology laboratories, lies in the fact that they are hardly traceable. They happen in the 'blind spot' of two governance modalities. One is integrated into the company: the design control and risk management functions. The other belongs to consensus-based industrial governance: the standards guiding the development of software as a medical device. The former derives its legitimacy from the latter. The core idea we will discuss is that, by happening in that blind spot, the evaluation of the different becomes a moment of uncontrolled moral and aesthetic deliberation in which, paradoxically, controlling the variability of laboratory processes becomes a sine qua non condition of the efficacy and safety of AI, and of the very possibility of optimizing those processes.

Traditional Open Panel P160
Entanglements of STS and bioethics: new approaches to the governance of artificial intelligence and robotics for health
  Session 1, Thursday 18 July 2024