
Accepted Paper:

Transparency in translation: an ethnographic study on diversity and bias in AI media systems  
Daniella Pauly Jensen (Maastricht University)


Short abstract:

An ethnographic study of how data scientists at a media company approach diversity and bias in AI development. It explores challenges in representation and bias mitigation, as well as tensions between explainability and transparency, highlighting the complexities of creating ethical AI systems.

Long abstract:

This paper presents an ethnographic study of a large media company in the Netherlands, focusing on the development of AI systems for media. The study explores how data scientists conceptualize and operationalize "diversity" and "bias", key aspects of explainability and transparency in AI. The research is based on interviews with and observations of data scientists at the company, as well as document analysis of papers in the field and of company policy documents. The study allows for an exploration of the sociotechnical aspects of AI development, examining how social factors are considered in the technical development of AI systems. I investigate the challenges data scientists face in ensuring that AI systems are representative of diverse user groups and in identifying and mitigating potential biases in their algorithms and data. Furthermore, I explore the tensions that arise between the goals of explainability and transparency in AI development, looking at trade-offs, challenges, and potential solutions. The paper contributes to understanding how practices of science communication shape sociotechnical imaginaries around AI, offering insights into the real-world complexities of developing responsible and ethical AI systems for media.

Traditional Open Panel P360
Sociotechnical dimensions of explainable and transparent artificial intelligence
Session 2, Wednesday 17 July 2024