
Accepted Paper:

Algorithmic encounters: an interactional approach to AI interpretability  
Gian Marco Campagnolo (University of Edinburgh), Rob Procter (Warwick University / Alan Turing Institute for Data Science and AI), Bernardo Villegas Moreno (University of Edinburgh), Felix-Anselm van Lier (University of Oxford)


Short abstract:

The paper suggests that AI is not only about coding but also about the ability of data scientists to communicate their results within a social context. It develops a metric for measuring interpretability in user/developer interactions during the development of an NLP project to support peace negotiations.

Long abstract:

This paper aims to create a social data science-based metric for measuring the interpretability of machine learning (ML). It suggests that ML development is not only about mathematics and coding but also about the ability of researchers to communicate ML activities within a social, technological, and organisational environment. Evaluating ML interpretability therefore involves examining how ML concepts are negotiated in a social and relational setting, where participants make claims about and consume the results produced by the algorithm. To develop this idea, the proposal adapts the concept of 'algorithmic encounters' (Goffman, 1986) and applies it to a natural language processing project for an application to support peace negotiations (Arana-Catania, Lier & Procter et al., 2022). The analysis is based on transcripts from eight project meetings, totalling 72,392 words, in which participants discuss the feasibility of three alternative natural language processing models. For instance, when a user examines the ML results and this prompts reflection on how the NLP model helps them understand why certain words are associated with each other, the exchange contributes positively to interpretability. If the ML results also match the panel's expectations, the contribution to interpretability is greater still. Conversely, if a participant cannot comprehend the results and begins to ask questions about the algorithm's internal maths, metrics, or data, this counts negatively towards interpretability. The results demonstrate how this metric helps evaluate the interpretability of comparative ML models, overcome the interpretability/explainability dichotomy, and set new research pathways for data science research.
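As an illustration only, the scoring logic described above could be sketched as follows. The categories, weights, and data are assumptions for the purpose of the example; the paper does not specify its coding scheme or weighting here, and this is not the authors' implementation.

```python
# Illustrative sketch only: categories and weights below are hypothetical,
# not taken from the paper, which does not publish its coding scheme here.

# Hypothetical weights for coded interaction turns in project meetings.
WEIGHTS = {
    "prompts_reflection": +1.0,   # user reflects on why certain words are associated
    "matches_expectation": +2.0,  # results match the panel's expectations
    "queries_internals": -1.0,    # questions about internal maths, metrics, or data
}

def interpretability_score(coded_turns):
    """Aggregate a simple interpretability score from coded meeting turns."""
    return sum(WEIGHTS.get(code, 0.0) for code in coded_turns)

# Example: turns coded from one meeting transcript (hypothetical data).
meeting_turns = ["prompts_reflection", "queries_internals", "matches_expectation"]
print(interpretability_score(meeting_turns))  # 2.0
```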

Traditional Open Panel P360
Sociotechnical dimensions of explainable and transparent artificial intelligence
Session 1: Wednesday 17 July 2024