
CP448


Enacting contestation of Artificial Intelligence (AI) – concepts, approaches and techniques 
Convenors:
Simon David Hirsbrunner (University of Tübingen)
Lou Therese Brandner (University of Tübingen)
Format:
Closed Panel

Short Abstract:

AI's growing influence prompts concerns about opacity, unreliability and bias in algorithmic decision-making. To address this, we advocate for a socio-technical conceptualization of AI contestability and investigate its enactment through situated entanglements of tools, practices and norms.

Long Abstract:

AI-enabled technologies are becoming mainstream in many areas of society, the economy, and individual lives. Against this backdrop, scholars across disciplines have identified multiple areas of concern regarding the reliability, fairness and accountability of AI-enabled decisions. On the one hand, these include instances of bias in AI data and algorithms, for instance when a lack of representativeness and diversity ultimately leads to the many facets of algorithmic discrimination (Barocas and Selbst 2016; Mehrabi et al. 2022). On the other hand, there is the issue of epistemic opacity in many AI-based processes of knowledge generation (Gunning et al. 2019; Rohlfing et al. 2021). The lack of transparency and insufficient access prevent users and stakeholders from understanding, evaluating and contesting decisions made by AI-powered systems.

Given these challenges, we advocate for transformations towards contestable AI in theory and practice. We understand contestability here as the ability to contest decisions made by, or with the aid of, algorithmic systems, as well as the process leading to such decisions. A specific realization of contestability accordingly defines what can be contested, who can contest, who is accountable, and what types of reviews are involved (Lyons 2021). Enabling contestation may include technical tools currently being developed in the area of AI audits and assessments. But it also involves broader socio-technical strategies that support multiple actors in criticizing, reflecting on and challenging AI mechanisms and decisions. Scholars informed by multiple theoretical approaches and scientific disciplines (Baumer et al. 2015; Hirsbrunner et al. 2022; Alfrink et al. 2022) have conceptualized and probed approaches for contesting predictions, mechanisms, and knowledge in socio-technical algorithmic systems and processes. In the future, such approaches will have to be embedded into concrete technologies, systems and cultures. In our panel, we bring together interdisciplinary contributions investigating the configuration and enactment of contestability in AI.

Accepted papers:

Session 1