- Convenors:
  - Roger Andre Søraa (NTNU)
  - Yana Boeva (University of Stuttgart)
  - Hendrik Heuer (Center for Advanced Internet Studies (CAIS) and University of Wuppertal)
  - Milagros Miceli (Weizenbaum Institute)
- Format:
- Traditional Open Panel
Short Abstract:
This panel engages in discussions of the sociotechnical transformations that AI brings to working life, creating new digital paradigms and epistemic cultures of “ghost work.” We aim to critically examine the obscured labor and the profound implications of advancements in AI.
Long Abstract:
The advent of AI not only transforms working paradigms but also reframes epistemic cultures and the very nature of labor. This panel dives into these profound transformations to critically scrutinize how different types of AI are impacting labor sectors and the multifaceted implications accompanying AI advancements. A central topic we seek to explore is the concept of "ghost work" (Gray & Suri 2019): work that appears to be done invisibly by technology but is in fact performed by humans hidden in the machines. Therefore, we ask: To whom is this work truly ghostly? What about human agency in machine worlds? What historical entanglements of "ghost work" with colonial legacies and labor exploitation can be found? What constitutes fair structuring of societal organization when AI systems thrive on obscured labor?
By elevating the voices and perspectives of so-called ghost workers, we aim to demystify the values inherent in such roles. While the term "ghost work" often carries problematic connotations, these roles can also hold tangible benefits. The challenge lies in retaining the merits of such work while addressing its inherent issues. As we approach a new era in which generative AI platforms like ChatGPT render every user a potential contributor of training data, we must also craft a vocabulary to articulate the emergent forms of ghost work. Moreover, there is a pressing need to spotlight "unwitting ghost work": instances where individuals' creations are harnessed, without their knowledge, to train foundation models.
How can the STS community investigate those whose labor is inadvertently obscured, erased, or co-opted? Drawing inspiration from critical ethnographic scholarship on AI and digital platforms, we recognize that human engagement with automated systems persists in concealed forms. This panel invites submissions that unearth instances of human labor in AI transformations. We aspire to chart a comprehensive map of "ghost work" and its kin.
Accepted papers:
Session 1
Roger Andre Søraa (NTNU), Shan Wang (Norwegian University of Science and Technology), Silvia Ecclesia (Norwegian University of Science and Technology)
Short abstract:
This presentation explores the impact of AI on worker hiring and recruitment processes, focusing on the sociotechnical changes that AI brings. It emphasizes AI's role in contemporary recruitment and discusses "HR ghost work," and its ethical implications, especially for hiring practices.
Long abstract:
This presentation explores the realm of ghost work within the context of worker hiring processes and HR, where artificial intelligence (AI) is increasingly used to make recommendations, and potentially decisions, regarding job recruitment and hiring. The primary objective is to shed light on the profound sociotechnical transformations that AI brings to bear on contemporary recruitment practices. I look at the interplay between human decision-making and algorithmic systems in the recruitment landscape, emphasizing that even when AI systems take the spotlight in recruitment, humans remain as "ghosts-in-the-machine." The project provides a comprehensive examination of how algorithmic technologies, driven by artificial intelligence and machine learning, have radically transformed the recruitment domain. I explore how these technological advancements have given rise to "AI ghost work in hiring processes," a phenomenon where hidden human labor and prevalent machine intelligence converge to streamline and enhance the hiring process, albeit at times obscuring and concealing human tacit knowledge and expertise. My analysis reveals the multifaceted nature of ghost work, which significantly impacts the recruitment landscape. I look at the ethical implications of these sociotechnical transformations, including questions surrounding fairness, bias, and transparency in algorithmic decision-making. My focus centers on how Human Resource Management (HRM) staff are affected by AI utilization in this field, offering insights into the evolving dynamics of worker hiring processes and emphasizing the pivotal role played by sociotechnical transformations where AI intersects with HR practices.
Mathew Iantorno (University of Toronto)
Short abstract:
This paper documents the use of digital interfaces, architecture, and onsite branding to downplay the presence of human labour and reinforce specific, lucrative sociotechnical imaginaries within the context of streetside retail automation in North American urban centres.
Long abstract:
This paper documents the use of interfaces, architecture, and branding to present human service sector work as automated processes within the context of streetside retail automation in North America. In their 2019 book, "Surrogate Humanity," Atanasoski and Vora speak to the trend of rendering workers invisible on TaskRabbit and other micro-work platforms. Embodied labour such as picking up groceries and cleaning homes is rendered abstract by practices of anonymization and asynchronous scheduling embedded into the interfaces of these apps, enabling "the fantasy that technology is performing the labour." Reiterating the ethos of ghost work, the authors outline that the innovation of these platforms is not algorithms or AI: "the innovation is the interface." Departing from the online platforms characteristic of ghost work, this paper documents the use of digital interfaces (and accompanying physical design features) to obfuscate the presence of human labour within digital automats, "Just Walk Out" grocery stores, and other sidewalk-level forms of retail automation. Although marketed as futuristic, AI-driven, and entirely autonomous, these service sector businesses generally rely on vast human infrastructures to restock products; clean and troubleshoot hardware; and even tele-operate entire apparatuses when the guiding software has failed. This presentation explores how such ostensibly autonomous retail technologies holistically employ architecture, digital interfaces, and onsite branding to downplay the presence of human labour and reinforce specific, lucrative sociotechnical imaginaries. Throughout, examples will be drawn from "Toronto 14-24," an ongoing data visualization project mapping the proliferation of these retail concepts in Toronto, Ontario over the past decade.
Noah Khan (University of Toronto)
Long abstract:
The present paper focuses on the ethics of the student-ChatGPT encounter, advancing the argument that this encounter is not a simple manipulation of technological signs and symbols, but instead a deeply haunted semiotic engagement with exploitative systems of labour. The investigation is underpinned by a material semiotic perspective, informed by Jacques Derrida's (1993/1994) Specters of Marx, offering a critical lens through which to view the encounter. This perspective recognizes that the encounter carries with it deeper symbolic and societal meanings, challenging the notion of neutral technological interaction that conceals an underclass of labourers, the existence of which troubles the various promises of generative artificial intelligence (GAI) futures in education. The paper emphasizes the need for a critical material semiotics approach that recognizes and addresses the hidden labour dynamics in ChatGPT's minimalist semiotics; the pedagogical imperative proposed is to recognize and confront the exploitation inherent in this process such that students have access to symbols through which they can understand a broader scope of GAI ethics (such as maps that chart GAI labour, ghostworker stories, etc.). The paper calls for the development of pedagogical materials that engage these material semiotics in a larger effort to foster a dialogical, reflective educational environment that transcends technical proficiency, encouraging ethical and societal considerations. In its conclusion, the paper offers a few pedagogical materials for educators to incorporate critical material semiotics into their teaching, including curriculum development, field trips, and resources that highlight the hidden labour behind GAI, fostering deeper ethical discussions around technology use.
Marco Marrone (University of Salento)
Long abstract:
The platform labor process is commonly associated with low-skill jobs made up of fragmented and routine tasks. This view - synthesized by the idea of a job that can be done simply with “a bike and a smartphone” - is also common among workers, who relate it to their vulnerability and exploitation. But is it really like that? Or do platform workers know more than they can tell? On closer inspection, rather than automating and simplifying the labor process, AI transformations are moving in the opposite direction, requiring workers to develop skills that are often unrecognized (and unpaid). By adopting a mixed approach combining labor process theory and STS, such "ghost skills" will be investigated through 35 interviews conducted in the city of Bologna (Italy) with workers of three of the most popular platforms: Deliveroo, Airbnb and Helpling. The analysis proceeds, firstly, by looking at the platforms' affordances, understood as those properties of the platform that both enable and limit the ways workers can conduct their activities; and secondly, by investigating how such affordances move platform workers to develop a set of ghost skills that are necessary to compete successfully with other workers and to maximize their possible incomes. The conclusion, after highlighting the potential of a dialogue between labor process theory and STS for investigating the "ghost" side of digital labor, stresses the key role that the recognition of such skills may have in empowering these workers.
Assia Wirth
Long abstract:
This paper examines the role of freelance platforms in the current production of face analysis (FA) technologies. FA is developed by first collecting and annotating vast volumes of data, and subsequently designing models trained upon these datasets. This work has been increasingly outsourced through complex labor networks. These neocolonial dynamics are often framed by ML producers in the global North as a matter of highly skilled engineers delegating the more menial work to low-skilled workers, the latter’s contribution often remaining unacknowledged. This paper seeks to present a counter-narrative, in which the ‘high vs. low skilled’ framing only makes sense as an attempt to rewrite and obscure the coloniality of the digital labor industry. Building on qualitative research conducted with freelance platform workers based in Kenya and Uganda, this paper argues that, more often than not, these workers’ expertise becomes invisibilized during FA production. Yet what sets platform workers apart from their clients is not their abilities but rather the nature of this work and their geographical and socioeconomic context. The latter, shaped by complex colonial histories, benefits ML industries in the global North, which are able, through freelance platforms, to seamlessly access services catering to their needs. Thus, freelancing platforms stand out as a new site of extractivist production for the ML industry, one that enables it to exploit asymmetric dynamics established through colonisation, reaffirming “violence at scale” (Ricaurte 2021) as the modus operandi of global ML and FA production.
Srravya Chandhiramowuli (University of Edinburgh), Alex Taylor (University of Edinburgh), Sara Heitlinger (City, University of London), Ding Wang
Long abstract:
Data annotation, an indispensable part of AI/ML system building, is a rapidly growing industry globally (Miceli & Posada, 2022; Irani, 2015; Poell et al., 2019). Yet, a model-centric, myopic view of AI (Sambasivan, 2022) affords little recognition to data annotation’s crucial contribution and wider challenges. Addressing this gap, we examine how human labour in data labelling for AI system-building is envisioned and operationalised. We draw on an ethnographic study of data work at an annotation company in India, during June - August 2022 at two of their centres located in semi-rural towns.
At these centres, first-generation office workers, particularly women workers, are actively hired to support their financial independence and career development through tech work. However, the expectations, priorities and preferences of data requesters dictated worker schedules, time off and the annotation tools at their disposal. We found that the choice of annotation tools varied with each project and was typically dictated by the requesters. Whether the requesters provided the tools or licensed them from a third party, annotation teams rarely enjoyed agency over them. Far from being neutral or objective, we found that annotation practices and tools serve to assert conformity, and locate authority and control amongst a few actors.
In examining the material practices, global flows and social relations that shape data annotation and AI, we show how data labelling comes in contact with model building, impact sourcing, social entrepreneurship, and venture capital funding and in doing so, reflect on the effectiveness and fragility of AI systems.
Andrew Smart (Google), Sonja Schmer-Galunder (University of Florida), Mark Diaz, Ding Wang, Erin Van Liemt, Atoosa Kasirzadeh, Ellis Monk (Harvard University)
Long abstract:
Data annotation remains the sine qua non of machine learning and AI. Recent work on data annotation highlights the importance of rater diversity for fairness and model performance, and new lines of research have begun to examine the working conditions of data annotation workers and the impact of annotator subjectivity on labels. Data annotation has become a global industry. This paper outlines a critical genealogy of data annotation, starting with its psychological and perceptual aspects. We draw on similarities with critiques of the rise of computerized lab-based psychological experiments in the 1970s, which questioned whether these experiments permit the generalization of results beyond the laboratory settings within which they are typically obtained. Similarly, do data annotations permit the generalization of results beyond the settings, or locations, in which they were obtained? Moreover, Western psychology is overly reliant on participants from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Many of the people who work as data annotation platform workers, however, are not from WEIRD countries; most data annotation workers are based in Global South countries. Social categorizations and classifications from WEIRD countries are imposed on non-WEIRD annotators through instructions and tasks, and through them, on data, which is then used to train or evaluate AI models in WEIRD countries. What does it mean for non-WEIRD workers to annotate data from and about WEIRD societies? We propose a framework for understanding the interplay of the global social conditions of data annotation with the subjective phenomenological experience of data annotation work.
Tianyu Zhao (University College London) Taoyue Wang (University College London)
Long abstract:
The convenient interfaces and intelligent algorithms powering China's booming platform economy obscure an invisible workforce subjected to economic instability and algorithmic control. Drawing on actor-network theory, this paper critically analyses the dynamics of labour exploitation as firms like Meituan, Didi and others adopt algorithmic management, data-driven dispatching, and integrated AI systems. Architectural mappings initially conceal real workers outside platform boundaries and activities. However, tracing associations reveals human actors driving the training data, content filtering, AI optimisation and microtask execution necessary for advancing automation and machine learning. These overlooked workers haunt projected technological futures, embodying the shadow labour force sustaining digital facades. Sociotechnical arrangements enact power asymmetries as platform owners govern through algorithmic protocols optimised for efficiency, scalability and capital growth over worker welfare. However, recognising points of vulnerability also reveals possibilities for reform through alternative network configurations that distribute definitional authority and economic stability more equitably across gig workforces and the platform owners increasingly dependent on their ghosted work. This paper contributes to research on the sociology of invisible work and global platform economies by highlighting concealed human actors struggling for justice within China’s growing on-demand infrastructure.
Sofie Kronberger (University of Vienna)
Long abstract:
In the increasingly digitized, datafied, and automated world of biomedicine, Machine Learning (ML) promises not only new forms of medical knowledge production but also patient empowerment. In the world of hearing aids, ML-based tools showcase new, seemingly automated ways of knowledge production. Users can now indirectly impact their hearing aid configurations by giving feedback through mobile phone applications. So far, this task has been reserved for audiologists, who relied on a mix of auditory tests and listening to their patients’ experiences – often over the span of several weeks or months. This time-intensive practice is now being accompanied, and in some cases replaced, by machine learning-based recommender systems.
Based on six months of ethnographic fieldwork in Austria, Germany, and Denmark, this paper examines the tedious, time-sensitive, and often physically painful task that users of AI-based hearing aids face in training not only their bodies but their hearing aids as well. Looking at this emerging practice not as a form of empowerment, but as a form of labor, allows us to better understand the economic, gendered, and racialized inequalities embedded in medical care for hearing aid users. I examine how existing understandings of “good” and “bad” patients interlock with emerging data economies and the incentives that strengthen users’ need to take part in training “their” algorithms.