- Convenors:
- Roger Andre Søraa (NTNU)
- Yana Boeva (University of Stuttgart)
- Hendrik Heuer (Center for Advanced Internet Studies (CAIS) and University of Wuppertal)
- Milagros Miceli (Weizenbaum Institute)
- Chairs:
- Hendrik Heuer (Center for Advanced Internet Studies (CAIS) and University of Wuppertal)
- Yana Boeva (University of Stuttgart)
- Format:
- Traditional Open Panel
- Location:
- NU-6A25
- Sessions:
- Tuesday 16 July, -, -
Time zone: Europe/Amsterdam
Short Abstract:
This panel discusses the sociotechnical transformations that AI brings to working life, creating new digital paradigms and epistemic cultures of “ghost work.” We aim to critically examine this obscured labor and the profound implications of advancements in AI.
Long Abstract:
The advent of AI not only transforms working paradigms but also reframes epistemic cultures and the very nature of labor. This panel dives into these profound transformations to critically scrutinize how different types of AI impact labor sectors and the multifaceted implications accompanying AI advancements. A central topic we seek to explore revolves around the concept of "ghost work" (Gray & Suri 2019)—where work appears to be done invisibly by technology but is in fact performed by humans hidden within the machines. Therefore, we ask: To whom is this work truly ghostly? What about human agency in machine worlds? What historical entanglements of "ghost work" with colonial legacies and labor exploitation can be found? What constitutes fair structuring of societal organization when AI systems thrive on obscured labor?
By elevating the voices and perspectives of so-called ghost workers, we aim to demystify the values inherent in such roles. While the term "ghost work" often carries problematic connotations, these roles can also hold tangible benefits. The challenge lies in retaining the merits of such work while addressing its inherent issues. As we approach a new era in which generative AI platforms like ChatGPT render every user a potential contributor of training data, we must also craft a vocabulary to articulate the emergent forms of ghost work. Moreover, there is a pressing need to spotlight "unwitting ghost work"—instances where individuals' creations are harnessed without their knowledge to train foundation models.
How can the STS community investigate those whose labor is inadvertently obscured, erased, or co-opted? Drawing inspiration from critical ethnographic scholarship on AI and digital platforms, we recognize that human engagement with automated systems persists in concealed forms. This panel invites submissions that unearth instances of human labor in AI transformations. We aspire to chart a comprehensive map of "ghost work" and its kin.
Accepted papers:
Session 1 Tuesday 16 July, 2024, -
Short abstract:
This presentation explores the impact of AI on worker hiring and recruitment processes, focusing on the sociotechnical changes that AI brings. It emphasizes AI's role in contemporary recruitment and discusses "HR ghost work" and its ethical implications, especially for hiring practices.
Long abstract:
This presentation explores the realm of ghost work within the context of worker hiring processes and HR, where artificial intelligence (AI) is increasingly used to make recommendations and potentially decisions regarding job recruitment and hiring. The primary objective is to shed light on the profound sociotechnical transformations that AI brings to bear on contemporary recruitment practices. I look at the interplay between human decision-making and algorithmic systems in the recruitment landscape, emphasizing that even when AI systems take the spotlight in recruitment, humans remain as "ghosts-in-the-machine." The project provides a comprehensive examination of how algorithmic technologies, driven by artificial intelligence and machine learning, have radically transformed the recruitment domain. I explore how these technological advancements have given rise to "AI ghost work in hiring processes," a phenomenon where hidden human labor and prevalent machine intelligence converge to streamline and enhance hiring, albeit at times obscuring and concealing human tacit knowledge and expertise. My analysis reveals the multifaceted nature of ghost work, which significantly impacts the recruitment landscape. I examine the ethical implications of these sociotechnical transformations, including questions surrounding fairness, bias, and transparency in algorithmic decision-making. My focus centers on how Human Resource Management (HRM) staff are affected by AI utilization in this field, offering insights into the evolving dynamics of worker hiring processes and emphasizing the pivotal role of sociotechnical transformations where AI intersects with HR practices.
Short abstract:
This paper documents the use of digital interfaces, architecture, and onsite branding to downplay the presence of human labour and reinforce specific, lucrative sociotechnical imaginaries within the context of streetside retail automation in North American urban centres.
Long abstract:
This paper documents the use of interfaces, architecture, and branding to present human service sector work as automated processes within the context of streetside retail automation in North America. In their 2019 book, "Surrogate Humanity," Atanasoski and Vora speak to the trend of rendering workers invisible on TaskRabbit and other micro-work platforms. Embodied labour such as picking up groceries and cleaning homes is rendered abstract by practices of anonymization and asynchronous scheduling embedded into the interfaces of these apps, enabling "the fantasy that technology is performing the labour." Reiterating the ethos of ghost work, the authors outline that the innovation of these platforms is not algorithms or AI: "the innovation is the interface." Departing from the online platforms characteristic of ghost work, this paper documents the use of digital interfaces (and accompanying physical design features) to obfuscate the presence of human labour within digital automats, "Just Walk Out" grocery stores, and other sidewalk-level forms of retail automation. Although marketed as futuristic, AI-driven, and entirely autonomous, these service sector businesses generally rely on vast human infrastructures to restock products; clean and troubleshoot hardware; and even tele-operate entire apparatuses when the guiding software has failed. This presentation explores how such ostensibly autonomous retail technologies holistically employ architecture, digital interfaces, and onsite branding to downplay the presence of human labour and reinforce specific, lucrative sociotechnical imaginaries. Throughout, examples will be drawn from "Toronto 14-24," an ongoing data visualization project mapping the proliferation of these retail concepts in Toronto, Ontario over the past decade.
Short abstract:
The present paper makes the case for the application of emotional labour theory to data annotation, exploring the psychological impacts on workers tasked with annotating potentially trauma-inducing content in Kenya.
Long abstract:
The present paper applies emotional labour theory to data annotation, exploring the psychological impacts on workers tasked with annotating potentially trauma-inducing content in Kenya. The case study highlights the emotional toll on workers, the emotionality of training datasets, and interfaces of emotional regulation, underscoring emotional labour's relevance in understanding data annotation tasks. This approach challenges current paradigms by advocating for broader labour definitions that include emotional dimensions, calling for improved support and compensation for data annotators to enhance both worker well-being and training dataset quality.
Short abstract:
A critical genealogy and social theory of data annotation: we propose a framework for understanding the interplay of the global social conditions of data annotation with the subjective phenomenological experience of data annotation work.
Long abstract:
Data annotation remains the sine qua non of machine learning and AI. Recent work highlights the importance of rater diversity for fairness and model performance, and new lines of research have begun to examine the working conditions of data annotation workers and the impacts and role of annotator subjectivity on labels. Data annotation has become a global industry. This paper outlines a critical genealogy of data annotation, starting with its psychological and perceptual aspects. We draw on similarities with critiques of the rise of computerized lab-based psychological experiments in the 1970s, which question whether such experiments permit the generalization of results beyond the laboratory settings in which those results are typically obtained. Similarly, do data annotations permit the generalization of results beyond the settings, or locations, in which they were obtained? Moreover, Western psychology is overly reliant on participants from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Many of the people who work as data annotation platform workers, however, are not from WEIRD countries; most data annotation workers are based in the Global South. Social categorizations and classifications from WEIRD countries are imposed on non-WEIRD annotators through instructions and tasks, and through them, on data, which is then used to train or evaluate AI models in WEIRD countries. What does it mean for non-WEIRD workers to annotate data from and about WEIRD societies? We propose a framework for understanding the interplay of the global social conditions of data annotation with the subjective phenomenological experience of data annotation work.
Short abstract:
This paper examines the role of freelance platforms in the current production of face analysis (FA) technologies. Building on qualitative research conducted with freelance platform workers, this paper argues that more often than not, these workers’ expertise becomes invisibilized during FA production.
Long abstract:
This paper examines the role of freelance platforms in the current production of face analysis (FA) technologies. FA is developed by first collecting and annotating vast volumes of data and subsequently designing models that may be trained on these datasets. This work has been increasingly outsourced through complex labor networks. These neocolonial dynamics are often framed by ML producers in the global North as a matter of highly skilled engineers delegating the more menial work to low-skilled workers, the latter’s contribution often remaining unacknowledged. This paper seeks to present a counter-narrative, in which the ‘high vs. low skilled’ framing only makes sense as an attempt to rewrite and obscure the coloniality of the digital labor industry. Building on qualitative research conducted with freelance platform workers based in Kenya and Uganda, this paper argues that more often than not, these workers’ expertise becomes invisibilized during FA production. Yet what sets platform workers apart from their clients is not their abilities but rather the nature of this work and their geographical and socioeconomic context. The latter, shaped by complex colonial histories, benefits ML industries in the global North, which are able, through freelance platforms, to seamlessly access services catering to their needs. Thus, freelancing platforms stand out as a new site of extractivist production for the ML industry, one that enables the latter to exploit asymmetric dynamics established through colonisation, reaffirming “violence at scale” (Ricaurte 2021) as the modus operandi of global ML and FA production.
Short abstract:
This paper presents findings from an ethnographic study in annotation centres in India. We highlight the actors, practices, and policies that make up processes of creating datasets for AI, revealing their entanglements with infrastructural histories, global supply chains, and cultural constraints.
Long abstract:
Data annotation, an indispensable part of AI/ML system building, is a rapidly growing industry globally (Miceli & Posada, 2022; Irani, 2015; Poell et al., 2019). Yet, a model-centric, myopic view of AI (Sambasivan, 2022) affords little recognition to data annotation’s crucial contribution and wider challenges. Addressing this gap, we examine how human labour in data labelling for AI system-building is envisioned and operationalised. We draw on an ethnographic study of data work at an annotation company in India, conducted from June to August 2022 at two of its centres located in semi-rural towns.
At these centres, first-generation office workers, particularly women, are actively hired to support their financial independence and career development through tech work. However, the expectations, priorities and preferences of data requesters dictated worker schedules, time off and the annotation tools at their disposal. We found that the choice of annotation tools varied with each project and was typically dictated by the requesters. Whether the requesters provided the tools or licensed them from a third party, annotation teams rarely enjoyed agency over them. Far from being neutral or objective, annotation practices and tools serve to assert conformity and to locate authority and control amongst a few actors.
In examining the material practices, global flows and social relations that shape data annotation and AI, we show how data labelling comes into contact with model building, impact sourcing, social entrepreneurship, and venture capital funding, and, in doing so, reflect on the effectiveness and fragility of AI systems.
Short abstract:
Applying an actor-network theory framework, this paper examines labor exploitation and power imbalances encoded within China's on-demand platform economies by tracing associations between algorithms, interfaces, workers, and platform owners.
Long abstract:
The convenient interfaces and intelligent algorithms powering China's booming platform economy obscure an invisible workforce subjected to economic instability and algorithmic control. Drawing on actor-network theory, this paper critically analyses the dynamics of labour exploitation as firms like Meituan, Didi and others adopt algorithmic management, data-driven dispatching, and integrated AI systems. Architectural mappings initially conceal real workers outside platform boundaries and activities. However, tracing associations reveals human actors driving the training data, content filtering, AI optimisation and microtask execution necessary for advancing automation and machine learning. These overlooked workers haunt projected technological futures, embodying the shadow labour force sustaining digital facades. Sociotechnical arrangements enact power asymmetries as platform owners govern through algorithmic protocols optimised for efficiency, scalability and capital growth over worker welfare. However, recognising points of vulnerability also reveals possibilities for reform through alternative network configurations that distribute definitional authority and economic stability more equitably across gig workforces and the platform owners increasingly dependent on their ghosted work. This paper contributes to research on the sociology of invisible work and global platform economies by highlighting concealed human actors struggling for justice within China’s growing on-demand infrastructure.