
Accepted Paper:

Transparency Tinkering and Causality Quests: Dealing with opacities of medical AI  
Charlotte Högberg (Lund University)

Short abstract:

Many researchers stress the need for transparency of medical artificial intelligence. Others argue that it cannot yet be made explainable enough in a clinically useful manner. This paper attends to views and practices amongst developers on how to make medical AI explainable, knowable and trustworthy.

Long abstract:

Within computer science and medicine, researchers stress the need for transparency and explainability of medical artificial intelligence (AI), for the sake of clinical usefulness and to safeguard patient outcomes, accountability, unbiased treatment and the reliability of scientific claims. Still, others argue that it is not yet possible to make medical AI explainable enough in a clinically useful manner, and that such explainability represents a false hope. Furthermore, there are tensions between the push for advances in explaining "black boxes" and building interpretable systems from the start, as well as over whether the trade-off between accuracy and interpretability is real. In addition, there are various ideas about what explainable AI actually is.

This paper aims to explore ideas and practices regarding AI transparency and explainability amongst those involved in developing AI for medical research and healthcare purposes, drawing on interviews and observations with researchers and doctoral students. Based on a sociotechnical perspective and an analysis of epistemic cultures, this paper argues for the need to understand how AI developers deal with AI opacity and work to make medical AI transparent, explainable, knowable and trustworthy. This makes visible the role of medical AI, and of technological choices, in knowledge production and in quests for causality and improved patient outcomes. Moreover, this paper shows how AI explainability is approached as a boundary object facilitating the process of AI moving from the research stage into potential clinical implementation.

Traditional Open Panel P360
Sociotechnical dimensions of explainable and transparent artificial intelligence
Session 2, Wednesday 17 July 2024, -