- Convenors: Livia Garofalo (Data and Society Research Institute), Alexa Hagerty (University of Cambridge)
- Discussant: Emily Martin (New York University)
- Format: Roundtable
- Sessions: Thursday 8 April (Time zone: America/Chicago)
Short Abstract:
This roundtable addresses the potential contributions of psychological anthropology to the study and critique of data-driven technological systems like artificial intelligence, which are largely based on theories and methods from experimental psychology.
Long Abstract:
The Cambridge Analytica scandal and the recent Netflix documentary "The Social Dilemma" have thrust the psychological aspects of technology into public conversation.
Yet concepts like addiction, attention, and empathy that underwrite such critiques are often taken as transparent and universal. Scholars have made crucial interventions highlighting the raced and gendered dimensions of these systems, disrupting claims of technological neutrality (Benjamin 2019; Noble 2018). However, the theories about the human mind and behavior that inform technological design are drawn from social and behavioral psychology and remain largely uninterrogated.
Psychological anthropology has centered pluralism and subjectivity in ways that are distinct from experimental psychology. While psychological anthropologists have tools to critique these sociotechnical systems, our perspectives have been largely absent from public conversation. As Emily Martin has noted, anthropologists have "worked so hard to identify alternative visions of human purpose... Now it is time to shout them from the rooftops."
In this roundtable, we probe the psychological models and assumptions about human minds that inform technological design. Inspired by Emily Martin's work on the interplay of empirical and experimental approaches, we invite discussion of:
- how psychological anthropology may provide productive friction to the individualistic and universalist models underwriting emerging technologies;
- the role and limitations of psychological anthropology, given the field's colonial origins, in engaging with transformative technologies;
- how an anthropological perspective might enrich our understanding of how the human is produced by technology and of "humane" technology.
Accepted contributions:
Session 1: Thursday 8 April, 2021
Contribution short abstract:
Popular discourses on digital media practices are often entrenched in psychologized and apolitical frameworks ascribing labels of addiction and deficiency to marginalized youths. I argue that ethnographic research on mundane digital practices erodes spectacular narratives of digitalization.
Contribution long abstract:
Popular narratives on digital media practices are often entrenched in psychologized and apolitical frameworks that ascribe reductionist labels of addiction and deficiency to the digital practices of marginalized youths in particular. I argue that ethnographic research on mundane and apparently banal digital practices, such as “mindless” scrolling through social media feeds, erodes spectacular narratives of digitalization. In my ethnographic study of Viennese youth centers in 2018 and 2019, I linked polemicized design practices, such as erasing effort and enabling flow, with the everyday struggles of marginalized youths faced with chronic boredom and a crisis of agency. Insights from the fieldwork suggest that digital practices such as infinite scrolling represent a social pulse against the backdrop of chronic boredom, even as they cement that boredom. Furthermore, what often lingers beneath arguments focused solely on addiction and digital literacy is the assumption that time spent online in an apparently “mindless” and inefficient manner is inherently problematic. Such approaches further marginalize youths who face extended periods of unemployment, waiting, and boredom and who have few alternatives for passing the time. Psychological anthropology can thus reintroduce ethnographic and critical complexity to psychological frameworks that remove individuals from their social and political contexts, test insular aspects of “being online”, and imply causal relationships between psychological phenomena, such as boredom, anxiety, and depression, on the one hand, and digital media technologies on the other.
Contribution short abstract:
A computer providing effective psychotherapy is a controversial prospect. Objections to it are often framed in terms of its damaging effect on the "therapeutic alliance," the interpersonal "active ingredient" of therapy. Drawing on original fieldwork in Australia, I consider this claim.
Contribution long abstract:
The prospect of a computer providing effective psychotherapy is both a tantalizing technological feat and a potentially disturbing rupture in human-human relations. It is also an idea almost as old as modern computing itself: Joseph Weizenbaum’s famous ELIZA software of the 1960s, a crude but surprisingly engaging simulated Rogerian therapist, finds echoes in contemporary automated CBT programs and AI chatbots. These most recent efforts to automate the talking cure have been applauded, but also met with skepticism and alarm about the encroachment of machines on what is meant to be a delicate and subtle interpersonal process.
In this paper I draw on original ethnographic fieldwork conducted with psychologists in Australia, where such “e-mental health” projects receive federal government funding as public health initiatives. Reflecting professional orthodoxy, many informants object to such automated interventions on the grounds that they lack or pervert the “therapeutic alliance”, the nebulous “active ingredient” of psychotherapy characterized by mutual recognition and interpersonal rapport. Others, often those researching and championing these automated interventions, claim that the alliance’s primacy has been overstated or question it altogether.
I argue that disagreements around this contested professional idiom highlight deep and unresolved questions about psychology: whether therapy is the exclusive domain of the skilled listener, whether the session has necessary physical limits, and whether humans are uniquely able to heal one another.
Contribution short abstract:
In this paper, I approach the links between psychological models and technological design through an in-depth study of current research in machine learning. In particular, I elaborate on Google DeepMind’s interest in idleness and trace it back to the 1990s psychology laboratory.
Contribution long abstract:
In this paper, I approach the links between psychological models and technological design through an in-depth study of machine learning research. I elaborate on Google DeepMind’s interest in implementing idleness in machine learning algorithms and trace it back to the 1990s psychology laboratory, when so-called resting state research drew attention to the processes that occur when volunteers’ brains are supposedly at rest.
Resting state research reversed the reigning experimental paradigm of cognitive neuroscience, placed new emphasis on cognitive bandwidth, and linked creativity and pathology to the subject’s ability to control "offline" thought. Against this backdrop, mind wandering has been reconceived as a system-critical mode of information processing, and mindfulness is increasingly positioned as a strategy to keep the pathological effects of sustained attention at bay.
Since Google’s machine learning algorithms “don’t even have Christmas off” (as Google DeepMind CEO Demis Hassabis put it in an interview with the Guardian), researchers are experimenting with mechanisms that implement through code what happens in human brains while we are slacking or asleep. In my paper, I engage with the situated character of the human that these algorithms implement, provide some insights into how this may affect psychological thinking, and suggest avenues for diversifying what machine learning algorithms could be.
Contribution short abstract:
Over the past decade, artificial intelligence has been one of the key drivers of innovation in China. Tracing the local history of AI-related concepts, this presentation highlights some features of the various discourses around them and their implications for the development of these technologies.
Contribution long abstract:
Over the past decade, artificial intelligence has been one of the key drivers of innovation in the Chinese tech industry. Government plans and policy recommendations have driven investment in AI research and development, and countless tech companies have set up AI research labs, reorienting their products toward the provision of automated services and “smart” technologies. This has fueled a society-wide hype around artificial intelligence, widely seen as a key technology for the future of the country’s prosperity and global standing. At the same time, mounting concerns about privacy and safety arising from the application of AI technologies in everyday settings have started fueling pushback and are the subject of increasingly widespread debate. While the development of artificial intelligence in China has largely followed the Silicon Valley model, the discourse around artificial intelligence in China is also shaped by a longer history of theories of computation, automation, and intelligence. For example, the term “artificial intelligence” itself is commonly translated as rengong zhineng, with the term zhineng indexing a sort of “intelligence” combining the philosophical concepts of zhihui “wisdom” and nengli “capability”. Zhihui is also commonly used to translate the adjective “smart” in terms like “smart city” or “smart logistics”, pointing toward a different articulation of the theoretical concepts through which these technologies are explained, marketed, and interpreted by the government, private companies, and the public. Tracing the history of AI-related concepts in China, this presentation highlights some characteristic differences among the various discourses around them and their implications for the development of these technologies.
Contribution short abstract:
An exploratory anthropological investigation of the biopolitics of AI-powered emotion recognition systems used in behavioural training programs in childhood education and for individuals on the autism spectrum.
Contribution long abstract:
Anthropologists have long been attuned to the epistemological gap between how bodies feel and how individuals make sense of how they feel (White 2019). Diversifying our understandings of the spectrum of human emotion and psychic life is part of our discipline’s mandate, and psychological anthropology is urgently needed to critically evaluate the design and development of emotion recognition technologies. In the field of affective computing, computer scientists are designing emotion recognition technologies to facilitate the emotional and social development of grade school students and individuals on the autism spectrum (Picard 2020; Aylett 2018). To be digitally scalable, current innovations in affective computing rely on a highly reductive model of emotion grounded in Western psychological science (Barrett 2019). Affective computing labs have become speculative experimental sites that attempt to compute and mimic human feeling. Yet the technologies they deploy risk flattening any “emotional” divergence into Western norms of standardized behaviour and increasing the social stigmatization of Autism Spectrum Disorder. This paper examines the biopsychic imaginaries and biopolitics (Stevenson 2014) surrounding the engineering of AI-powered emotion recognition technologies and discusses the consequences of attributing caring and teaching roles to digital companions.