- Convenors:
- Jason Pridmore (Erasmus University)
- João Fernando Ferreira Gonçalves (Erasmus University Rotterdam)
- Format:
- Traditional Open Panel
- Location:
- Agora 2, main building
- Sessions:
- Wednesday 17 July, -, -
Time zone: Europe/Amsterdam
Short Abstract:
This panel examines explainability and transparency as key mechanisms allowing the opening up of normative processes, practices and data within artificial intelligence (AI) and machine learning techniques. It will discuss the consequences of current approaches to explainable AI and transparency.
Long Abstract:
Developments associated with artificial intelligence (AI) and machine learning techniques have created a critical level of concern within STS and among broader publics. Calls by some corporate developers, academic researchers and government officials to pause or stop AI development constitute an unusual perspective on technology development, given the prominence the technology has garnered. Underlying these concerns are the hidden processes and objectives of AI innovation. In contrast, concepts such as explainability and transparency are seen as key mechanisms for opening up the underlying normative processes, practices and data within such technology development.
What are the consequences of current approaches to explainable AI? How are our meaning-making processes in relation to AI altered by an increase in different forms of transparency? From a critical perspective, how are calls for explainability and transparency used to shape broader understandings of and support for AI development? This open panel welcomes contributions that explore how practices of science communication, including dimensions of explainability and transparency, shape sociotechnical imaginaries around artificial intelligence, as well as reflections on how STS approaches can contribute to transformations in these domains.
We welcome academic paper contributions in any of the following domains:
- Framings of explainability and/or transparency of AI;
- Science communication of AI and machine learning;
- Citizen constructions of AI and machine learning;
- AI development practices under a sociotechnical framework;
- Governance of AI;
- Tensions between explainability and transparency;
- Case studies surrounding AI and machine learning communication.
Accepted papers:
Session 1: Wednesday 17 July 2024, -
Paper short abstract:
Despite the rapid growth of deep learning applications, xAI research lacks an epistemological and political focus. Our study categorizes 12,000+ papers, revealing the diversity of xAI methods. We propose a 3-dimensional typology to navigate their technical, empirical, and ontological dimensions and to empower regulators.
Paper long abstract:
Deep learning techniques have undergone massive development since 2012, while a mathematical understanding of their learning processes remains out of reach. The mounting successes of systems based on such techniques in critical fields (medicine, the military, public services) urge policy makers as well as economic actors to interpret the systems’ operations and to provide means of accountability for their outcomes.
DARPA’s 2016 Explainable AI program was followed by a surge of scientific publications on “Explainable AI” (xAI). With the majority of publications coming from computer science, this literature generally frames xAI as a technical problem rather than an epistemological and political one.
Exploring the tension between market strategies, institutional demand for explanation, and a lack of mathematical resolution, our presentation proposes to establish a critical typology of xAI techniques.
- We first systematically categorized 12,000+ papers in the xAI research field, then conducted a content analysis of a diversified, representative sample.
- As a first result, we show that xAI methods are considerably diversified. We summarize this diversity in a 3-dimensional typology (sketched below): a technical dimension (what kind of calculation is used?), an empirical dimension (what is being looked at?) and an ontological dimension (what makes the explanation right?).
The heterogeneity of these techniques not only illustrates disciplinary specificities but also points to the opportunistic methodologies developed by AI practitioners in response to this tension. Future work aims to identify the social conditions that generate the diversity of these techniques and to help regulators navigate them.
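Purely as an illustration of how such a three-dimensional typology might be operationalized when tagging publications, a minimal sketch follows; the class, field values, and example entry are hypothetical and are not categories taken from the study.

```python
# Hypothetical sketch: encoding the proposed typology as a record attached
# to each categorized xAI publication. Values are invented for illustration.
from dataclasses import dataclass

@dataclass
class XAIMethodRecord:
    paper_id: str      # placeholder identifier for a categorized publication
    technical: str     # what kind of calculation is used?
    empirical: str     # what is being looked at?
    ontological: str   # what makes the explanation right?

example = XAIMethodRecord(
    paper_id="paper-0001",
    technical="perturbation-based attribution",
    empirical="input features of single predictions",
    ontological="fidelity to the model's behaviour",
)
print(example)
```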
Paper short abstract:
We critically examine current practices surrounding the use of interpretability methods in computer vision in an effort to determine whether differences exist between the explanations stakeholders seek from these methods and the methods’ actual outputs.
Paper long abstract:
The computer vision community frequently employs interpretability methods in an attempt to provide explanations for model predictions and has converged on a common set of techniques over time. Implicit in their use is the assumption that current interpretability methods are epistemically grounded in a way that provides us with knowledge about how the model selects features and why a particular class is assigned to an image. Through successive generations of papers, the credibility of particular paradigms and perspectives on interpretability has been established, regardless of whether the discourse in the literature has been in favor of, or critical of, mainstream interpretability strategies. Examining these practices, we regard it as important to determine whether the explanations being sought by stakeholders and the outputs of these methods are incongruent. Here we examine our current paradigm of interpretability and suggest that critical reflection on its rich technical history could provide a steady foundation for interpretability practices. Latour’s (1987) discussion of the closing of black boxes as a process of fact-making offers a way to examine the minutiae of how specific methods used in AI research become black-boxed over time. Taking up the research community’s reflections on the function of interpretability in the computer vision ecosystem, we investigate the foundations of this “cycle of affirmation” (Goldblum et al. 2023) and propose a reframing of interpretability practices from a perspective of Latourian matters-of-concern (2004).
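For readers less familiar with the techniques discussed above, a minimal sketch of one widely used interpretability method in computer vision, a gradient-based saliency map, is given below. This is generic context, not a description of the paper’s own methods; the pretrained model and image path are placeholders.

```python
# Gradient-based saliency map: a common computer-vision interpretability
# technique. The pretrained model and image file are placeholders.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical input
image.requires_grad_(True)

logits = model(image)
top_class = logits.argmax(dim=1)
logits[0, top_class].backward()  # gradient of the top prediction w.r.t. pixels

# Per-pixel importance: maximum absolute gradient across colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze()  # shape (224, 224)
```

Whether such a map constitutes the kind of explanation stakeholders are actually seeking is precisely the question the paper raises.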
Paper short abstract:
The paper suggests that AI development is not only about coding but also about the ability of data scientists to communicate their results within a social context. It develops a metric to measure interpretability in user/developer interaction during the development of an NLP project to support peace negotiations.
Paper long abstract:
This paper aims to create a social data science-based metric for measuring the interpretability of machine learning (ML). It suggests that ML development is not only about mathematics and coding but also about the ability of researchers to communicate ML activities within a social, technological, and organisational environment. Therefore, evaluating ML interpretability involves examining how ML concepts are negotiated in a social and relational setting, where participants make claims and consume the results produced by the algorithm. To embrace this idea, the proposal adapts the concept of 'algorithmic encounters' (Goffman, 1986) and applies it to a natural language processing project for an application to support peace negotiations (Arana-Catania, Lier & Procter et al., 2022). The analysis is based on transcripts from eight project meetings, 72,392 words in total, in which participants discuss the feasibility of three alternative natural language processing models. For instance, when a user examines ML results and this prompts reflections on how the NLP model helps them understand why certain words are associated with each other, it contributes positively to interpretability; if the ML results match the panel's expectations, it contributes even more. On the other hand, if a participant cannot comprehend the results and begins to ask questions about the algorithm's internal maths, metrics, or data, it reflects negatively on interpretability. The results demonstrate how this metric helps evaluate the interpretability of comparative ML models, overcome the interpretability/explainability dichotomy, and open new research pathways for data science research.
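To make the scoring idea concrete, a minimal sketch of how such a transcript-based metric could be computed is shown below; the coding labels and weights are invented for illustration and do not reproduce the authors’ actual scheme.

```python
# Hypothetical scoring of hand-coded meeting turns, following the logic in
# the abstract: reflective engagement with results counts positively, while
# falling back on the algorithm's internal maths or data counts negatively.
from collections import Counter

WEIGHTS = {
    "reflects_on_word_associations": 1,   # user reasons about why words co-occur
    "results_match_expectations": 2,      # results align with the panel's expectations
    "asks_about_internal_maths": -1,      # user asks about internal maths, metrics or data
}

def interpretability_score(coded_turns):
    """Sum the weighted counts of coded turns from a meeting transcript."""
    counts = Counter(coded_turns)
    return sum(WEIGHTS.get(label, 0) * n for label, n in counts.items())

example_turns = [
    "reflects_on_word_associations",
    "results_match_expectations",
    "asks_about_internal_maths",
]
print(interpretability_score(example_turns))  # 1 + 2 - 1 = 2
```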
Paper short abstract:
AI has the potential to support older adults in living independently. Explainable AI (XAI) could improve the understanding of AI-based decisions. Although XAI seems simple, it is more difficult in care practices, and the meaning of and need for explainability differ among stakeholders.
Paper long abstract:
The number of older adults living independently at home is growing, which is often said to create a need for more technological assistance. Dutch policy aims to allow older adults to remain living at home as long as possible. In such policies, technologies are expected to support older adults in performing daily practices. Artificial Intelligence (AI), as part of these technologies, has the potential to improve personalized care and ageing in place. The internal machineries of AI systems often remain hidden as a black box. Interest in eXplainable AI (XAI) originates from this black-boxing. XAI should assist users in understanding the underlying logic of the decision-making process and in identifying mistakes. It is unknown how various stakeholders understand AI and what value they see in XAI.
We conducted 21 scenario-based interviews to investigate XAI in care. We aimed to understand ‘what XAI is’ in the worlds of different stakeholders and the different enactments of XAI that become visible in their practices. Preliminary findings show that XAI sounds simple but proves more difficult in practice. Stakeholders express different meanings of and needs for XAI, varying from knowledge of algorithms or data-specific knowledge to practical understanding. In the care of older adults, trust and willingness to use AI are essential. The level of explainability needed differs between stakeholders. As a follow-up, we recommend research into the enactment of XAI in practice, and into the form or degree of XAI that is needed, and for whom.
Paper short abstract:
This paper examines whether and how explanations for an opaque AI system affect stakeholders’ trust, drawing on the case of an educational health game for adolescents. It discusses how explainability, transparency, and accountability relate to one another in the design of AI systems.
Paper long abstract:
The 2022 White House Blueprint for an AI Bill of Rights outlines a right to “notice and an explanation” as one of five rights for consumers. These guidelines state that these automated systems, which are imagined to be transformative and cutting edge, must bridge the gap to ordinary consumers by providing explanations. This paper evaluates what stakeholders expect from an explanation of a “low-risk” machine learning application, an educational health game for children. What do people want to know about an AI software system in order to trust it? Is there a social consensus on what is needed to hold an AI system accountable? And what implications does this have for guidelines and regulations emerging in this arena? This study draws on semi-structured interviews with 28 stakeholders of an educational health game and of a social media application, including software designers, substantive experts, software users, and regulators. Our findings suggest that the majority of these stakeholders did not want in-depth, comprehensive explanations for AI software systems, instead citing worries about privacy. However, a subset of stakeholders (school staff), in contrast to most others, claimed responsibility for potential harms done by the AI system. These findings suggest that 1) operationalizing the right to an explanation may be challenging and 2) pre-existing accountability frameworks and pathways might nevertheless provide oversight and delineate lines of responsibility for AI systems. This case, at the intersection of public health, education, and medicine, speaks to the potential reception and regulation of more mundane AI technologies.
Paper short abstract:
Many researchers stress the need for transparency in medical artificial intelligence. Others argue that it cannot yet be made explainable enough in a clinically useful manner. This paper attends to views and practices amongst developers on how to make medical AI explainable, knowable and trustworthy.
Paper long abstract:
Within computer science and medicine, researchers stress the need for transparency and explainability of medical artificial intelligence (AI), for the sake of clinical usefulness and to safeguard patient outcomes, accountability, unbiased treatment and the reliability of scientific claims. Still, others argue that it is not yet possible to make medical AI explainable enough in a clinically useful manner, and that explainability represents a false hope. Furthermore, there are tensions between the push for advances in explaining “black boxes” and making systems interpretable from the start, as well as over whether the trade-off between accuracy and interpretability is real. In addition, there are various ideas about what explainable AI actually is.
This paper explores ideas and practices regarding AI transparency and explainability amongst those involved in developing AI for medical research and healthcare purposes, drawing on interviews and observations with researchers and doctoral students. Based on a sociotechnical perspective and an analysis of epistemic cultures, the paper argues for the need to understand how AI developers deal with AI opacity and work to make medical AI transparent, explainable, knowable and trustworthy. In this way, we can make visible the role of medical AI, and of technological choices, in knowledge production and in quests for causality and improved patient outcomes. Moreover, the paper shows how AI explainability is approached as a boundary object facilitating the process of AI moving from the research stage into potential clinical implementation.
Paper short abstract:
Our empirical work de-centres ML models as objects of explanation to explore the wider network of practices that contribute to organizational decision-making. Inspired by the STS engaged programme, we explore how sociotechnical approaches can intervene in Explainable AI and in how explanations are done.
Paper long abstract:
The widespread use of machine learning (ML) models for decision-making is recognised as a social and political challenge, centring on concerns about transparency and accountability, to which an increasingly popular solution is ‘Explainable AI’ (XAI). Here, explanation is understood as a primarily technical challenge, with secondary challenges associated with communicating complex computational processes in ways that can be easily understood. This framing stands in contrast to STS research, which insists that technologies cannot be explained as stable objects but are situated, with emergent and relational effects. How could an STS approach engage with the current drive towards explanation? And what opportunities might this offer for critiquing and shaping XAI approaches?
We explore these questions through an ethnographic study carried out in collaboration with a financial services company in the UK. In contrast to XAI, our work de-centres models as objects of explanation in order to explore the wider network of ‘machine learning practices’ that bring models into being and into use. This allows us to see how XAI is embedded in a wider ecology of multiple, situated, and intra-acting explanations that contribute to organizational decision-making. From this perspective, organizational complexity is what makes explanation such a challenge.
We argue that while XAI cannot deliver on its solutionist promise, it raises important questions about how decisions are made in sociotechnical assemblages. Inspired by the STS engaged programme of research, we explore how sociotechnical approaches can be mobilized to intervene in how explanations are ‘done’ and to reframe XAI mechanisms for transparency and accountability.
Paper short abstract:
Attempts to make AI explainable can often be as opaque as the systems they are trying to make transparent. This ethnographic work shows how technical solutions for opening up black-box AI led to a great deal of interpretive work, which was itself opaque because of the often-ignored social work involved.
Paper long abstract:
An often-stated reason for the low take-up of artificial intelligence (AI) in healthcare is its lack of explainability and transparency. This is further complicated by the tension that the machine learning algorithms exhibiting the best predictive accuracy are also the most opaque. Embracing the concept of "sociotechnical intelligence," in which social and technical elements are intricately linked, our ethnographic study explores the attempted creation of explainable AI within an interdisciplinary team developing an AI intervention for patients with multiple long-term conditions. We show how project members started to explore making their work explainable after interaction with the patient group associated with the project, who had said it was a priority. From here, it was thought that a technical solution to the problem could be found through the application of an explainability tool called SHapley Additive exPlanations (SHAP). However, in subsequent interdisciplinary meetings involving data scientists and clinicians, the results of the SHAP analysis underwent extensive debate and interpretation to bridge the gap between technical explanations and clinical relevance. Here, the validity of many of the results was questioned by the clinicians, who interpreted them as proxies for other features such as age. The results highlight the opacity inherent not only in AI but also in the technical solutions employed to create transparency. This work extends the previous literature on explainable AI by showing how explainability is negotiated in practice, and theorises it as socially negotiated and as opaque as the systems it is trying to reveal.
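For readers unfamiliar with the tool named above, a generic, minimal sketch of how SHAP is typically applied to a tabular classifier follows. The synthetic data and model are stand-ins and do not reflect the project's actual pipeline or clinical features.

```python
# Generic SHAP usage on a synthetic tabular classifier; the data, features
# and model are stand-ins, not the project's clinical pipeline.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # stand-in patient features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic outcome

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # explainer specialised for tree ensembles
shap_values = explainer.shap_values(X)  # per-feature contribution to each prediction

# Summary plots of these values are the kind of output that, in the study,
# clinicians and data scientists then debate, for instance reading high
# attributions as proxies for unmodelled features such as age.
```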
Paper short abstract:
The pervasive implementation of artificial intelligence (AI) and its societal implications put new emphasis on evaluations of AI. With this study, we respond to that call by developing a framework for evaluative and reflexive development practices for trustworthy AI.
Paper long abstract:
The pervasive implementation of artificial intelligence (AI) and its potential impact on society has put new emphasis on evaluating AI systems, to ensure transparency, explainability, and overall trustworthiness.
Evaluations of AI, such as audits, tend to occur after implementation, allowing for an analysis of the complete assemblage but forgoing the prevention of incidents. Preventative approaches, by contrast, rely on the development process, applying, for example, security-by-design (SbD), privacy-by-design (PbD), or the ethics guidelines for trustworthy AI.
Unfortunately, both approaches require significant effort, resources, and time. This, combined with the lack of unified definitions for design frameworks, affects the adoption of either.
To enable better evaluations, we aim to develop a framework to support reflexive development practices for trustworthy AI, combining exploratory interviews, literature research, and framework development and testing.
Initial interviews show that the required effort and resources, together with the absence of incentives or legal requirements, are significant barriers to evaluating AI development. Moreover, evaluations tend to focus on specific metrics and requirements, or are considered merely academic exercises.
Our previous research suggests AI developers perform practices – knowingly and unknowingly – that support the materialization of trustworthiness in both technical and non-technical understandings. Both our findings here and our insights from previous research highlight the need to remain close to current experiences and practices of AI developers to ensure the most effective adoption of evaluative development practices.
The framework developed in this paper will enable AI developers to communicate their reflections on their work, increasing the transparency of AI practices beyond AI systems themselves.
Paper short abstract:
An ethnography of how data scientists at a media company approach diversity and bias in AI development. It explores challenges in representation and bias mitigation, and tensions between explainability and transparency, highlighting the complexities of creating ethical AI systems.
Paper long abstract:
This paper presents an ethnographic study of a large media company in the Netherlands, focusing on the development of AI systems for media. The study explores how data scientists conceptualize and operationalize “diversity” and “bias”, key aspects of explainability and transparency in AI. The research is based on interviews with and observations of data scientists at the company, as well as document analysis of papers in the field and company policy documents. The study allows for an exploration of sociotechnical aspects of AI development, examining how social factors are considered in the technical development of AI systems. I investigate the challenges data scientists face in ensuring that AI systems are representative of diverse user groups and in identifying and mitigating potential biases in their algorithms or data. Furthermore, I explore the tensions that arise between the goals of explainability and transparency in AI development, looking at trade-offs, challenges, and potential solutions. The paper contributes to understanding how practices of science communication shape sociotechnical imaginaries around AI, offering insights into the real-world complexities of developing responsible and ethical AI systems for media.
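As one deliberately simplified illustration of the kind of bias check such teams might run (not a description of the company's actual practice), a sketch of a demographic-parity-style comparison of recommendation rates follows; the group labels and recommendation log are invented.

```python
# Toy bias check: compare how often recommended items come from two
# (invented) content groups; a large gap flags potential representation bias.
from collections import Counter

recommendations = ["group_a", "group_a", "group_b", "group_a", "group_b",
                   "group_a", "group_a", "group_b", "group_a", "group_a"]

counts = Counter(recommendations)
total = sum(counts.values())
rates = {group: n / total for group, n in counts.items()}

parity_gap = abs(rates.get("group_a", 0.0) - rates.get("group_b", 0.0))
print(rates, parity_gap)  # {'group_a': 0.7, 'group_b': 0.3} 0.4 for this toy log
```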