
Accepted Paper:

Facets of (in-)contestability: the case of AI-powered police intelligence applications  
Simon David Hirsbrunner (University of Tübingen), Steven Kleemann (Berlin School of Economics and Law), Milan Nebyl Tahraoui (HWR Berlin / FÖPS / CMB)


Short abstract:

Regulatory efforts emphasize the central role of transparency in enabling the contestability of AI-powered decisions. Our case study on high-risk AI systems finds, in contrast, that contestability must be characterized as a socio-technical quality that goes beyond technical openness and explainability.

Long abstract:

Emerging regulation such as the EU AI Act highlights the importance of transparency and explainability for AI deployed in high-risk scenarios as a means to enable contestation of AI-supported decisions. In practice, however, the combination of legal obligations and technical transparency measures may not be sufficient to enable realistic contestability (Kleemann and Hirsbrunner 2024).

Based on our participatory observation as ethical and legal scholars embedded in a large-scale research and development project charting methods for AI-powered police intelligence (i.e. facial recognition, speaker recognition, online communication analysis and object detection), we characterize the various challenges faced when aiming for more contestable AI systems in high-risk scenarios. These problems are, inter alia, linked to common technical conceptualizations of AI model explainability (Rohlfing et al. 2021) that ignore the social and normative elements enacting contestability. In contrast, our contribution discusses contestability as a situated achievement by multiple actors and as an entanglement of various techniques, practices and norms. As an illustrative case study, we describe the different affordances and configurations of (in-)contestability for four exemplary stakeholder groups: police investigators, police data analysts, data subjects and external AI supervisory authorities. We then conceptualize the situatedness of AI contestation practices, drawing on STS concepts and the interdisciplinary literature on contestability in technology development and use (Alfrink et al. 2022; Hirsbrunner et al. 2022; Lyons 2021). Finally, the paper explores whether ELSI interventions (ethical, legal and social issues), co-laboration (Niewöhner 2015) or integrated research (Spindler et al. 2020) constitute appropriate research constellations for characterizing and facilitating AI contestability.

Closed Panel CP448
Enacting contestation of Artificial Intelligence (AI) – concepts, approaches and techniques
  Session 1 Tuesday 16 July, 2024