
Accepted Paper:

Going beyond explainable AI: explaining machine learning practices in organizations  
Marisela Gutierrez Lopez (University of Bristol), Susan Halford (University of Bristol)

Short abstract:

Our empirical work de-centres ML models as objects of explanation to explore the wider network of practices that contribute to organizational decision-making. Inspired by the STS engaged programme, we explore how sociotechnical approaches can intervene in Explainable AI and in how explanations are done.

Long abstract:

The widespread use of machine learning (ML) models for decision-making is recognised as a social and political challenge – centring on concerns about transparency and accountability – to which an increasingly popular solution is ‘Explainable AI’ (XAI). Here, explanation is understood as a primarily technical challenge, with secondary challenges of communicating complex computational processes in ways that can be easily understood. This framing stands in contrast to STS research, which insists that technologies cannot be explained as stable objects but are situated, with emergent and relational effects. How could an STS approach engage with the current drive to explanation? And what opportunities might this offer for critiquing and shaping XAI approaches?

We explore these questions through an ethnographic study, carried out in collaboration with a financial services company in the UK. In contrast to XAI, our work de-centres models as objects of explanation to explore the wider network of ‘machine learning practices’ that bring models into being and use. This allows us to see how XAI is embedded in a wider ecology of multiple, situated, and intra-acting explanations that contribute to organizational decision-making. From this perspective, organizational complexity is what makes explanation such a challenge.

We argue that while XAI cannot deliver on its solutionist promise, it raises important questions about how decisions are made in sociotechnical assemblages. Inspired by the STS engaged programme of research, we explore how sociotechnical approaches can be mobilized to intervene in how explanations are ‘done’ and to reframe XAI mechanisms for transparency and accountability.

Traditional Open Panel P360
Sociotechnical dimensions of explainable and transparent artificial intelligence
Session 2: Wednesday 17 July 2024