- Convenors:
- Simon David Hirsbrunner (University of Tübingen)
- Lou Therese Brandner (University of Tübingen)
- Format:
- Closed Panel
- Location:
- NU-2B11
- Sessions:
- Tuesday 16 July, -
Time zone: Europe/Amsterdam
Short Abstract:
AI's growing influence prompts concerns about opacity, unreliability and bias in algorithmic decision-making. To address this, we advocate for a socio-technical conceptualization of AI contestability and investigate its enactment through situated entanglements of tools, practices and norms.
Long Abstract:
AI-enabled technologies are becoming mainstream in many areas of society, the economy, and individual lives. Against this backdrop, scholars from various disciplines have identified multiple areas of concern regarding the reliability, fairness and accountability of AI-enabled decisions. On the one hand, these include instances of bias in AI data and algorithms, for instance when a lack of representativeness and diversity ultimately leads to the many facets of algorithmic discrimination. On the other hand, there is the issue of epistemic opacity in many AI-based processes of knowledge generation. A lack of transparency and insufficient access prevent users and stakeholders from understanding, evaluating and contesting decisions made by AI-powered systems.
Given these challenges, we advocate for transformations towards contestable AI in theory and practice. We understand contestability here as the ability to contest decisions made by, or with the aid of, algorithmic systems, as well as the process leading to it. A specific realization of contestability accordingly defines what can be contested, who can contest, who is accountable and what types of reviews are involved (Lyons 2021). Enabling contestation may include technical tools currently being developed in the area of AI audits and assessments. But it also involves broader socio-technical strategies that support multiple actors in criticizing, reflecting on and challenging AI mechanisms and decisions. Scholars informed by multiple theoretical approaches and scientific disciplines (Alfrink et al. 2022) have conceptualized and probed approaches for contesting predictions, mechanisms, and knowledge in socio-technical algorithmic systems and processes. In the future, such approaches will have to be embedded into concrete technologies, systems and cultures. In our panel, we bring together interdisciplinary contributions investigating the configuration and enactment of contestability in AI.
We also take this opportunity to introduce a publishing venue for the STS community in the form of a research topic at Frontiers:
https://fro.ntiers.in/contesting_AI
Accepted papers:
Session 1: Tuesday 16 July, 2024
Paper short abstract:
Regulatory efforts emphasize the central role of transparency to enable contestability of AI-powered decisions. Our case-study on high-risk AI systems finds, in contrast, that contestability has to be characterized as a socio-technical quality that goes beyond technical openness and explainability.
Paper long abstract:
Emerging regulation like the EU AI Act highlights the importance of transparency and explainability of AI deployed in high-risk scenarios to enable contestation of AI-supported decisions. In practice, however, the combination of legal obligations and technical transparency measures may not be sufficient to enable realistic contestability (Kleemann/Hirsbrunner 2024).
Based on our participant observation as ethical and legal scholars embedded in a large-scale research and development project charting methods for AI-powered police intelligence (i.e. facial recognition, speaker recognition, online communication analysis and object detection), we characterize the various challenges faced when aiming at more contestable AI systems in high-risk scenarios. The problems are, inter alia, linked to common technical conceptualizations of AI model explainability (Rohlfing et al. 2021) that ignore the social and normative elements enacting contestability. In contrast, our contribution discusses contestability as a situated achievement by multiple actors and as an entanglement of various techniques, practices and norms. As an illustrative case study, we describe the different affordances and configurations of (in-)contestability for four exemplary stakeholders: police investigators, police data analysts, data subjects and external AI supervision authorities. We then conceptualize the situatedness of AI contestation practices drawing on STS concepts and the interdisciplinary literature on contestability in technology development and use (Alfrink et al. 2022; Hirsbrunner et al. 2022; Lyons 2021). Finally, the paper also explores the question of whether ELSI interventions (ethical, legal, social issues), co-laboration (Niewöhner 2015) or integrated research (Spindler et al. 2020) constitute appropriate research constellations for characterizing and facilitating AI contestability.
Paper short abstract:
This talk argues for a social ontological notion of contestability drawn from Critical Theory that is based on a mutual recognition of “standard authority”, i.e. the internalized dispositions of practice participants to mutually demand and give justification to each other.
Paper long abstract:
If “contestability” with respect to AI systems is supposed to mean more than formal mechanisms of voice and exit, or even redress, we need a better understanding of the concept of contest itself. Drawing on Critical Theory, in this talk I will argue that contestability is not only about providing those affected by AI systems with the formal means to contest. Rather, meaningful contestability is also based on a mutual recognition of “standard authority” (Stahl 2013), i.e. the internalized dispositions of practice participants to mutually demand and give justification to each other. As practices of datafication and prediction involving AI systems have not yet developed around such a notion of standard authority, I will distinguish different forms of (social) critique (reflective, therapeutic, and expressive critique) that can be employed to establish the mutual recognition relations necessary for contestability. Such a social ontological concept of contestability offers a theoretical grounding for voice, exit, and redress that is rooted neither in a discourse-ethical ideal of equal expression of opinion nor in an ideal of informed consent, as both would be too demanding for individuals as participants of everyday practices involving AI systems. Rather, it is based on a much less demanding idea of a mutual recognition of performances-as-interpretation within practices of contestation.
Paper short abstract:
Industry guidelines and regulatory attempts do little to provide accessible and equitable means for contesting adverse effects of AI technologies. This contribution uses technological mediation and design theory to probe how the co-constituted roles of contestant and contested may be attended to.
Paper long abstract:
Contestation, in general, is an after-the-fact affair. Ideally, in transparent, or at least slowly progressing, processes of bureaucracy or legality there are steps, reference points, on which to build contestation, and often appropriate or at least dedicated channels. However, the intricacies of an AI system are a different setup; and existing self-assigned industry guidelines or lagging regulatory attempts suggest that contestability is far from settled or achieved. From interface to off-shore silicate, the probabilistic models, sprawling software dependencies, multi-lingual architectures, and inscrutable databases create a haze of technological mediation. How can this be conceptually and practically attended to? Of course, the challenges of developing socio-technical systems which include AI technology components are well-known; and participatory, critical and decolonial design methods to address, challenge or even mitigate harm for people and planet are well established. But at runtime, and with the spectre of adaptive AI models shifting the parameters and feedback loops of systems significantly, where can a foothold be gained? In this paper, I suggest thinking of contestation as a matter of technologically mediated co-constitution, in other words, a process in which who contests and what is contested are brought forth. This shows, on the one hand, that other, seemingly trivial types of contestation are never not mediated; and it indicates, on the other hand, what in particular sets contestation regarding AI systems apart: it is in layers of co-constitution, not exchanges between parties, that access points for contestation are made or disappeared.
Paper short abstract:
This contribution addresses user-centric contestability of AI-based job interviewing, an increasingly pervasive and often-criticized technology. The focus lies on how contestation can be facilitated for end users (both recruiters and job applicants) through a contestability by design approach.
Paper long abstract:
Artificial intelligence (AI) supported job interviewing presents itself as a new mainstream solution in the human resources (HR) industry. But the technology has been publicly criticized for a lack of accuracy and potentially producing biased results [1]. In light of such scrutiny and informed by work in two related research projects, this contribution addresses the question of how end users can be enabled to challenge the technology through a contestability by design approach [2].
Critically evaluating and, if necessary, contesting AI mechanisms, assumptions and predictions must be considered a socio-technical challenge involving heterogeneous actors, data, technologies, and infrastructures. Focusing on users, we note that AI interviewing tools have two types of end users with different needs, expectations, and concerns: 1) job applicants whose data are analyzed and 2) HR professionals basing further decisions on the analyzed data. Both should be able to make informed choices, communicate issues and challenge outcomes regarding their interactions with an AI-based interviewing system, without this negatively impacting their careers or career prospects. To facilitate this, contestability must be built into AI systems during their design and development. This contribution thus analyzes key considerations for enabling end users to reflect on and meaningfully intervene in the context of AI interviewing systems, touching on related concepts such as accountability and transparency from an AI ethics perspective.
[1] Wall, Schellmann. 2021. We tested AI interview tools. Here’s what we found. MIT Technology Review. https://www.technologyreview.com/2021/07/07/1027916/we-tested-ai-interview-tools/
[2] Alfrink et al. 2022. Contestable AI by design: Towards a framework. Minds & Machines. https://doi.org/10.1007/s11023-022-09611-z
Paper short abstract:
To enable meaningful contestation of public algorithmic systems, lessons can be learned from democracy, the Rule of Law, and system safety. These disciplines show the need for institutional and organizational cultures that reinforce feedback channels in the design processes of public algorithmic systems.
Paper long abstract:
Public organizations are increasingly confronted with and held responsible for harms emerging from their public algorithmic systems. Meanwhile, the design processes of these systems take place in an institutional void, shifting design activities to subpolitical realms and impeding affected citizens from contesting these systems. Currently, design processes are advanced by developing and implementing policy instruments that should support the ethical, legal, and technical scrutiny of the algorithmic systems being designed, for example impact assessments and algorithm registers. Although these instruments do provide the information flows needed for contestation, they represent an ad hoc and untargeted approach that does not guarantee meaningful contestation. Instead, contestation needs to be part of feedback channels that are institutionalized in the design process of public algorithmic systems. In this contribution, I explore and combine insights on institutionalizing feedback channels from disciplines that reflect the nature of public algorithmic systems: democracy, the Rule of Law, and system safety. Democracy and the Rule of Law, which structure public administration contexts, consider citizens the central actors in contestation, advanced through a system of checks and balances. System safety considers hazards in software-based systems as emergent properties that can only be controlled through learning in feedback channels; contestation then becomes a trigger for system quality improvements. An important similarity between these perspectives is their emphasis on specific institutional and organizational cultures that acknowledge, value, and support contestation. In this contribution, I describe the conditions for such a culture in public algorithmic design processes.