- Convenor:
- Tamara Gupper (Goethe University, Frankfurt am Main)
- Format:
- Panel
- Sessions:
- Monday 6 June, -
Time zone: Europe/London
Short Abstract:
This panel addresses the role of the people involved in and responsible for the development of AI and robotics, as well as their perception of the respective technology.
Long Abstract:
The idea of creating objects that autonomously imitate – or even surpass – human capabilities is not new. While for the past few millennia these stories were the stuff of legends, the last few decades have seen technological advancements that have made some of these ideas materialise. The defeat of human world champions in highly mediatised ‘man vs. machine matches,’ such as the chess match between Garry Kasparov and Deep Blue in 1997, or the match in the Asian board game ‘Go’ between Lee Sedol and AlphaGo in 2016, are probably among the more spectacular examples of this. Systems that include AI components have, however, also found their way into many people’s homes and daily routines, including, for example, product recommendation systems, automatic translation software, and robot vacuum cleaners.
So far, most research on AI and robotics from a social science perspective has focussed either on fictional representations of the technology or on the users’ perspective on it. But what about the people behind these technologies? Who are they, and how do they perceive their work, the technologies they (help) develop, and the impact these technologies might have on society at large? What processes underlie the development of AI and robotics, and what social and material contexts are at play? This panel invites contributions from a wide range of disciplines which address the people involved in and responsible for the development of AI and robotics, as well as their perspective on the technology.
Accepted papers:
Session 1 Monday 6 June, 2022, -
Paper short abstract:
AI systems require vast amounts of labour to develop and maintain, with data annotation playing a key role. This empirical study investigates practitioner perspectives and expectations regarding data annotation, promoting critical reflection on wider machine learning practices.
Paper long abstract:
A vast amount of human labour is required to develop and maintain AI models and systems, with data annotation playing a central role. However, this labour is often overlooked in the discourse around technological innovation and responsible AI. Furthermore, such work is conducted at the intersection of multiple professional groups who often have little visibility, such as gig economy workers. This labour is not just invisible to users of AI; its mechanisms can be partially obscured even from the view of the machine learning practitioners who use data annotation services. Our research addresses this gap, mapping the points of contact which practitioners have with annotators, and their perceptions of annotators and annotation companies. Building on literature around crowdsourced data work, we challenge dominant narratives on automation by centring the invisible labour behind the functioning of many AI technologies and exploring methods for the creation of more participatory and democratic machine learning systems. Our empirical work investigates machine learning practitioner perspectives and expectations regarding data annotation work. Drawing on workshops conducted with machine learning practitioners, we explore the collaborative practices of data work, particularly the points of contact between data workers with different levels and types of expertise. We focus on experiences of data ‘wrangling’ – practices of data acquisition, labelling, and cleaning – as the point where researchers and engineers interface with domain experts, annotators, and other workers. In addition to contributing to an understanding of the hidden labour involved in data wrangling, we aim to promote critical reflection on wider machine learning practices.
Paper short abstract:
This paper examines a critical step in the development of today’s AI systems based on machine learning: the annotation of training data by human experts. Focusing on AI in medical imaging in China, it explores how human expertise gets negotiated, transformed, and inscribed in annotation processes.
Paper long abstract:
This paper examines a critical step in the development of today’s AI systems based on machine learning: the annotation of training data by human experts. With an empirical focus on the application of AI in image-based medical diagnosis in China, the paper will unpack the often laborious yet invisible processes by which human medical expertise gets inscribed and transformed in the annotated medical data used to train AI algorithms. While radiologists specialize in interpreting medical images such as radiographs and writing reports, it has never been a standard, routine practice to label all exact lesions on an image as precisely as machine learning requires. Moreover, such work can oftentimes be contested among medical experts themselves and is extremely time-consuming, especially on a large scale. Drawing on 10 months of ethnographic fieldwork at two Chinese medical AI startups and extensive in-depth interviews with medical image annotators, I will analyze the modes and strategies for medical image annotation as well as the negotiations over credible expertise in the Chinese medical AI industry. In particular, I will highlight the emergence of a nascent profession referred to as “medical annotation specialists” in China, whose work represents a decentralization of expertise in the medical sphere. By opening up the “black box” of AI technology development and examining the human dynamics behind it, the paper will shed light on the co-production of medical AI algorithms and new social orders.
Paper short abstract:
This is an ethnographic account of disabled workers recruited by an NGO to annotate training data for smart speakers in China. Unpacking the annotation processes from the workers' perspectives, I argue that the social context of disability and disability expertise provide essential resources to AI.
Paper long abstract:
This paper examines an understudied labour force behind the production of artificial intelligence (AI) systems — people with disabilities. In recent years, people with disabilities in China have been explicitly enrolled by government programmes, corporations, and NGOs to classify and label training data for AI systems. This paper provides an ethnographic account of one of these programmes. Run by a disabled persons’ organisation, the programme is staffed with predominantly blind, low vision, and physically impaired data workers, tasked to sort data for an AI-based virtual assistant device (akin to Alexa).
Centring the perspectives of disabled data annotators, this paper unpacks the processes, and the material and social contexts, of labelling training data for smart speakers in China. The inherent uncertainties entailed in classifying human intentions mediated by smart speakers without sociolinguistic contexts, I argue, demand a constant workforce of experienced annotators with trained tacit knowledge, rich institutional memory, and strong coordination with the AI developers. The quality of the data is therefore closely tied to the stability of the annotation workforce. Disabled workers in China, pushed out of a wide range of job opportunities by structural ableism, supplied such stability for the AI company. Meanwhile, by enacting non-normative practices of access and time, the workers have reshaped their work conditions, consequently improving their work performance and experience. Complicating the debate on whether digital work empowers or exploits people with disabilities, this paper calls for greater attention to how disability may in turn shape the production of AI.
Paper short abstract:
In this talk, I use the notion of friction (Tsing 2005) to examine human data labour that keeps AI-based automation running. I will discuss an unconventional case of data labour: Finnish prisoners producing training data for a local artificial intelligence company.
Paper long abstract:
The Finnish data labour case highlights how the notion of friction aids in uncovering contradictory value aims and opening novel ways of exploring processes of automation. At first glance, prison data labour is ‘ghost work’ – a now recognized form of low-paid click work. Viewed through the lens of friction, however, we are dealing with local and situational variations of data labour: how high-tech development can be married with humane penal policies and rehabilitative aspirations.
The prison data labour case draws our attention to human involvements and imaginaries that are crucial in promoting automated futures. By doing so, the case demonstrates anticipations, collaborations and eventual disconnects, making it plain that the humans, with their guiding values, are the most critical component in human-machine arrangements.