
P360


Sociotechnical dimensions of explainable and transparent artificial intelligence 
Convenors:
Jason Pridmore (Erasmus University)
João Fernando Ferreira Gonçalves (Erasmus University Rotterdam)
Format:
Traditional Open Panel
Location:
Agora 2, main building
Sessions:
Wednesday 17 July, -, -
Time zone: Europe/Amsterdam

Short Abstract:

This panel examines explainability and transparency as key mechanisms for opening up normative processes, practices and data within artificial intelligence (AI) and machine learning techniques. It will discuss the consequences of current approaches to explainable AI and transparency.

Long Abstract:

Developments associated with artificial intelligence (AI) and machine learning techniques have created a critical level of concern within STS and among broader publics. Calls by some corporate developers, academic researchers and government officials to pause or stop AI development constitute an unusual perspective on technology development, given the prominence these technologies have garnered. Underlying these concerns are the hidden processes and objectives of AI innovation. In contrast, concepts such as explainability and transparency are seen as key mechanisms for opening up the underlying normative processes, practices and data within such technology development.

What are the consequences of current approaches to explainable AI? How are our meaning-making processes in relation to AI altered by an increase in different forms of transparency? From a critical perspective, how are calls for explainability and transparency used to shape broader understandings of and support for AI development? This open panel welcomes contributions that explore how practices of science communication, including dimensions of explainability and transparency, shape sociotechnical imaginaries around artificial intelligence, as well as reflections on how STS approaches can contribute to transformations in these domains.

We welcome academic paper contributions on any of the following domains:

- Framings of explainability and/or transparency of AI;

- Science communication of AI and machine learning;

- Citizen constructions of AI and machine learning;

- AI development practices under a sociotechnical framework;

- Governance of AI;

- Tensions between explainability and transparency;

- Case studies surrounding AI and machine learning communication.

Accepted papers:

Session 1 Wednesday 17 July, 2024, -
Session 2 Wednesday 17 July, 2024, -