
Accepted Paper:

The field of explainable AI: machines to explain machines?  
Nicolas Berkouk (CNIL), Mehdi Arfaoui (CNIL / EHESS), Romain Pialat (CNIL)


Short abstract:

Despite the rapid growth of deep learning applications, xAI research lacks an epistemological and political focus. Our study categorizes 12,000+ papers, revealing the diversity of xAI methods. We propose a three-dimensional typology to navigate their technical, empirical, and ontological dimensions and to empower regulators.

Long abstract:

Deep learning techniques have developed massively since 2012, while a mathematical understanding of their learning processes remains out of reach. The growing success of systems based on these techniques in critical fields (medicine, the military, public services) urges policy makers as well as economic actors to interpret such systems’ operations and to provide means of accountability for their outcomes.

DARPA’s 2016 Explainable AI program was followed by a sudden surge of scientific publications on “Explainable AI” (xAI). With the majority of publications coming from computer science, this literature generally frames xAI as a technical problem rather than an epistemological and political one.

Exploring the tension between market strategies, institutional demands for explanation, and the lack of mathematical resolution, our presentation proposes a critical typology of xAI techniques.

- We first systematically categorized 12,000+ papers in the xAI research field, then conducted a content analysis of a representative, diversified sample.

- As a first result, we show that xAI methods are considerably diversified. We summarize this diversity in a three-dimensional typology: a technical dimension (what kind of calculation is used?), an empirical dimension (what is being looked at?), and an ontological dimension (what makes the explanation right?).

The heterogeneity of these techniques not only illustrates disciplinary specificities but also points to the opportunistic methodologies developed by AI practitioners in response to this tension. Future work aims to identify the social conditions that generate the diversity of these techniques and to help regulators navigate them.

Traditional Open Panel P360
Sociotechnical dimensions of explainable and transparent artificial intelligence
  Session 1, Wednesday 17 July 2024