Accepted Contribution:

Redefining explainability: a study on the development process of an explainable artificial intelligence in pathology  
Oceane Fiant (Université de technologie de Compiègne)

Short abstract:

This presentation centers on an AI project in pathology that addresses AI opacity in an unusual way: rather than striving to make the models explainable, the project focuses on meticulously building their training sets. From this case, I aim to draw broader lessons about the explainability of AI systems.

Long abstract:

An engineer's approach to the opacity of machine learning models typically involves either opting for simpler models (such as linear regression or decision trees) or adopting techniques from the field of explainable artificial intelligence (such as Local Interpretable Model-Agnostic Explanations (LIME) or Shapley Additive Explanations (SHAP)). The latter option makes the model's decision-making easier to understand, for instance by highlighting how specific features influence the model's output.
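As a concrete illustration of the latter, the following minimal sketch shows the SHAP idea on a public tabular dataset. It is my own example, not part of the project discussed in this talk, and assumes the shap and scikit-learn Python packages; the model and dataset are illustrative only.

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Fit a small model on a public tabular dataset (illustrative only).
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # Shapley values attribute each prediction to the input features: for one
    # sample, the per-feature values sum, together with the model's average
    # output, to that sample's prediction.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:5])  # array of shape (5, 10)

    # How much did each feature push the first sample's prediction up or down?
    for feature, value in zip(X.columns, shap_values[0]):
        print(f"{feature:>4}: {value:+7.2f}")

A table like the one this prints makes visible which features drove a given output, which is what "highlighting how specific features influence the model's output" means in practice.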

In my talk, I will present a case where the challenge of opacity is addressed through careful training set construction rather than model explainability. This project, a collaboration between a pathologist and an engineer, aims to create a dataset of breast cancer tumor images. This dataset will then be used to train convolutional neural networks to identify tumor components on whole slide images. The goal of my presentation is to review this project's innovative solution and to derive broader insights into the explainability of artificial intelligence systems.

Combined Format Open Panel P036
Questioning data annotation for AI: empirical studies
Session 1: Friday 19 July 2024