
Accepted Paper:

A Critical Examination of the Interpretability Paradigm  
Joseph O'Brien (University of California, San Diego); Nima Boscarino (UC San Diego)


Short abstract:

We critically examine current practices surrounding the use of interpretability methods in computer vision to determine whether the explanations stakeholders seek from these methods diverge from what the methods actually produce.

Long abstract:

The computer vision community frequently employs interpretability methods in an attempt to explain model predictions and has converged on a common set of techniques over time. Implicit in their use is the assumption that current interpretability methods are epistemically grounded in a way that provides knowledge about how a model selects features and why a particular class is assigned to an image. Through successive generations of papers, the credibility of particular paradigms and perspectives on interpretability has been established, regardless of whether the discourse in the literature has been in favor of or critical of the mainstream interpretability strategies. Examining these practices, we regard it as important to determine whether the explanations sought by stakeholders are incongruent with the outputs of these methods. Here we examine our current paradigm of interpretability and suggest that critical reflection on its rich technical history could provide a steady foundation for interpretability practices. Latour’s (1987) discussion of the closing of black boxes as a process of fact-making offers a way to examine the minutiae of how specific methods used in AI research become black-boxed over time. Taking up the research community’s reflections on the function of interpretability in the computer vision ecosystem, we investigate the foundations of this “cycle of affirmation” (Goldblum et al. 2023) and propose a reframing of interpretability practices from the perspective of Latourian matters-of-concern (2004).

Traditional Open Panel P360
Sociotechnical dimensions of explainable and transparent artificial intelligence
Session 1: Wednesday 17 July 2024