
Accepted Paper:

Opening the Black Box: Explainability and Accountability in Automated Software Systems  
Alka Menon (Yale University), Zahra Abba Omar (Yale University)


Short abstract:

This paper examines whether and how explanations for an opaque AI system affect stakeholders’ trust, drawing on the case of an educational health game for adolescents. It discusses how explainability, transparency, and accountability relate to one another in the design of AI systems.

Long abstract:

The 2022 White House Blueprint for an AI Bill of Rights outlines a right to “notice and an explanation” as one of five rights for consumers. The guidelines state that these automated systems, which are imagined as transformative and cutting edge, must bridge the gap to ordinary consumers through explanations. This paper evaluates what stakeholders expect from an explanation of a “low-risk” machine learning application, an educational health game for children. What do people want to know about an AI software system in order to trust it? Is there a social consensus on what is needed to hold an AI system accountable? And what implications does this have for guidelines and regulations emerging in this arena? The study draws on semi-structured interviews with 28 stakeholders of an educational health game and a social media application, including software designers, substantive experts, software users, and regulators. Our findings suggest that most of these stakeholders did not want in-depth, comprehensive explanations of AI software systems, citing worries about privacy. However, one subset of stakeholders, school staff, in contrast to most others, claimed responsibility for potential harms done by the AI system. These findings suggest that 1) operationalizing the right to an explanation may be challenging and 2) pre-existing accountability frameworks and pathways might nevertheless provide oversight and delineate lines of responsibility for AI systems. This case, at the intersection of public health, education, and medicine, speaks to the potential reception and regulation of more mundane AI technologies.

Traditional Open Panel P360
Sociotechnical dimensions of explainable and transparent artificial intelligence
Session 1: Wednesday 17 July 2024