
Accepted Paper:

Contested epistemic obligations of technologically mediated medical decision making  
Christian Herzog (University of Lübeck)


Short abstract:

This contribution elucidates how an epistemic injustice perspective contests the broadly proclaimed epistemic obligation to use medical AI. A readjustment beyond narrow notions of medical utility and towards epistemic inclusion appears necessary.

Long abstract:

Initial achievements and predicted progress in health-related decision support systems have given rise to rather sweeping claims of an impending epistemic obligation to use them. Most of these claims rest on indications that AI can reduce diagnostic errors and improve health outcomes.

Yet, work on the ethics and epistemology of explainable artificial intelligence (AI) has begun to contest such an obligation, arguing that AI's potential epistemic opacity infringes on professional responsibility and obstructs shared decision-making, in effect impairing health outcomes. However, a recent health technology assessment report suggests that, despite contributing to patient autonomy, shared decision-making yields no statistically significant effects on, e.g., morbidity outcomes. Consequently, attempts to increase patient autonomy via explainable AI appear to be of secondary importance. By adopting an epistemic injustice perspective, and inspired by feminist bioethics, we instead identify a possible obligation for medical research and technology development to epistemically include patient perspectives in designs, taking patients seriously as knowers from an innovation's inception.

The conflicting paradigms of maximizing health outcomes versus supporting epistemic justice give rise to at least two different approaches to medical AI: (i) designing decision support systems with explainable and evaluative interfaces that allow an AI output to be contested post hoc during shared decision-making, or (ii) epistemically including patient perspectives throughout development and the entire life cycle of medical AI systems.

Within this context, we ask how ethics and STS scholarship can investigate epistemic inclusion in medical AI and discuss its implications for healthcare governance.

Traditional Open Panel P160
Entanglements of STS and bioethics: new approaches to the governance of artificial intelligence and robotics for health
  Session 2, Thursday 18 July 2024