Accepted Paper:

Negotiating Explainability in the Development of Sociotechnical Intelligence: An ethnographic study of the creation of AI for healthcare  
Duncan Reynolds (Queen Mary University of London), Megan Clinch (Queen Mary University of London), Deborah Swinglehurst (Queen Mary University of London)

Short abstract:

Attempts to make AI explainable can be as opaque as the systems they aim to render transparent. This ethnographic work shows how technical solutions for opening up black-box AI demanded extensive interpretive work, which was itself opaque because of the often-ignored social labour involved.

Long abstract:

An often-stated reason for the low uptake of artificial intelligence (AI) in healthcare is a lack of explainability and transparency. This is complicated by the tension that the machine learning algorithms exhibiting the best predictive accuracy are often the most opaque. Embracing the concept of "sociotechnical intelligence," in which social and technical elements are intricately linked, our ethnographic study explores the attempted creation of explainable AI within an interdisciplinary team developing an AI intervention for patients with multiple long-term conditions. We show how project members began to explore making their work explainable after interacting with the patient group associated with the project, who had identified explainability as a priority. From there, it was thought that a technical solution to the problem could be found through an explainability tool called SHapley Additive exPlanations (SHAP). However, in subsequent interdisciplinary meetings involving data scientists and clinicians, the results of the SHAP analysis underwent extensive debate and interpretation to bridge the gap between technical explanations and clinical relevance. The validity of many results was questioned by the clinicians, who interpreted them as proxies for other features such as age. These findings highlight the opacity inherent not only in AI but also in the technical solutions employed to create transparency. This work extends the previous literature on explainable AI by showing how explainability is negotiated in practice, theorising it as socially negotiated and as opaque as the systems it seeks to reveal.
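
For readers unfamiliar with SHAP, the sketch below shows a typical workflow of the kind the abstract describes, assuming a Python environment with the shap and scikit-learn packages; the model, feature names, and synthetic data are illustrative placeholders, not the study's actual pipeline.

```python
# Minimal, hypothetical SHAP workflow. The features here ("age",
# "blood_pressure", "prescription_count") and the model are illustrative
# stand-ins, not the study's real variables or data.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for patient data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
feature_names = ["age", "blood_pressure", "prescription_count"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP attributes each individual prediction to the input features
# via Shapley values from cooperative game theory.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # (n_samples, n_features), log-odds units

# The summary plot ranks features by their overall contribution.
shap.summary_plot(shap_values, X, feature_names=feature_names)
```

A summary plot of this kind is usually the artifact put in front of clinicians, and it is where the interpretive questions the abstract describes, such as whether a feature is acting as a proxy for age, tend to arise.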

Traditional Open Panel P360
Sociotechnical dimensions of explainable and transparent artificial intelligence
Session 2, Wednesday 17 July 2024