- Convenors:
- Sahana Udupa (LMU Munich)
- Peter Hervik (NOISE - Network of Independent Scholars in Education (Copenhagen))
- Formats:
- Roundtable
- Mode:
- Face-to-face
- Location:
- Facultat de Geografia i Història 221
- Sessions:
- Thursday 25 July, -
Time zone: Europe/Madrid
Short Abstract:
Artificial intelligence is undergoing such an explosive development that Silicon Valley pundits have described it as the “Oppenheimer Moment”. This roundtable will ask how AI is doing and undoing anthropology as we grapple with technological development of monumental reach and ferocity.
Long Abstract:
Artificial intelligence is undergoing such an explosive development that Silicon Valley pundits have adopted the term “Oppenheimer Moment” to capture the gravity of the current moment. Large language models such as ChatGPT are becoming a key source of text writing and learning. Implant technology is prepped to realize a dream of merging biological and machine intelligence. AI is mastering hyper-personalized skills in the field of intimate relations, whether as elderly care, dating, or recruitment for political ideologies. These issues carry far-reaching and even apocalyptic dimensions of the kind Oppenheimer also confronted, except that he knew what the end result of the product he was leading would be.
Philosopher David Chalmers (2022) has suggested that AI products are part of a system of zombies: outwardly, they behave like a conscious human being but inwardly they have no conscious experience or feelings. This captivating formulation triggers distinct lines of exploration for anthropologists. First, how might anthropology of ethics bear on these novel “zombie” systems (Das 2012; Fassin 2015; Hervik 2018)? Second, “zombies” should not obscure material-colonial conditions of labour and data extraction as well as epistemic practices of category building and labelling that lie at the root of AI development (Udupa, Maronikolakis & Wisiorek 2023). Finally, what is the role of experientially grounded, reflexive research which forms the core of anthropological knowledge? This roundtable will take up mega questions on these three scales of inquiry, asking how AI is doing and undoing anthropology as we grapple with technological development of monumental reach and ferocity.
Accepted papers:
Session 1 Thursday 25 July, 2024, -
Paper short abstract:
This intervention discusses the proliferation of "entitifications" of artificial intelligence by reviewing some notable examples - the philosophical zombie, the stochastic parrot, and the masked shoggoth - to argue that this emerging menagerie of entities urgently demands anthropological attention.
Paper long abstract:
A decade of development in artificial intelligence since the deep neural network breakthroughs of 2012 has introduced countless new social actors into the everyday lives of people around the globe. From technical innovations like transformer models and encoder/decoder architectures to more imaginative personifications of machine learning processes, it is undeniable that humans anthropomorphize new technologies as they attempt to make sense of them. This intervention discusses the proliferation of personifications (or perhaps more accurately, "entitifications") of artificial intelligence by reviewing some notable examples - the philosophical zombie, the stochastic parrot, and the masked shoggoth - to argue that this emerging menagerie of entities urgently demands anthropological attention. Much of anthropology relies on long-term, dialogic engagement with the Other, and if the discipline wants to take artificial intelligence seriously it has to figure out how to also relate to these new ethnographic interlocutors - neither by reducing them to mere technological tools nor by uncritically accepting the characterizations offered by corporations or other disciplines. Debates in both cyborg anthropology and multispecies ethnography have consistently argued for the need to expand the scope of both anthropos and ethnos to other non-human actors. As automated systems not only populate societal imaginaries but also become active participants in the shaping of social worlds, I argue that it is necessary to take these new ethnographic interlocutors seriously.
Paper short abstract:
Defining responsible use of generative AI in academic writing cannot afford to ignore the political economy of knowledge appropriated via stochastic parroting. An anthropology that stakes its reproduction on becoming AI literate risks becoming ethically and epistemologically illiterate.
Paper long abstract:
Soon after the launch of large language models (LLMs) such as OpenAI’s ChatGPT, the world of academia went into a justified panic that associated the advent of generative AI with the end of education, of scientific research and, generally, a “textpocalypse” (Kirschenbaum 2023). Universities began the arduous process of compiling guidelines for students’ use of AI, recalibrating standards of honesty and integrity, and bootstrapping programs of AI literacy.
My intervention is grounded in such processes – experimenting with AI tools in the classroom and participating in a department (Sociology and Anthropology) committee charged with establishing thresholds of responsible use of generative AI in social science research and academic writing. I focus on the leftovers of such bureaucratic processes and specifically the political economy of knowledge appropriated and reproduced via the stochastic parroting characteristic of LLMs. While much attention should go to the extractive labor that makes AI possible, the ownership structures that circumscribe the operation of AI are just as important. Not only is AI the property of corporations with vested profit motives, but so is the substance of knowledge (prompts and knowledge files) produced via AI-user interactions. Is the framework of intellectual property protection enough against the ethical infringements inherent in the prompting of AI with qualitative ethnographic data? Moreover, what are the epistemological and political risks for anthropology in outsourcing the interpretation of ethnographic data to AI? An anthropology that stakes its reproduction on becoming AI literate might very well end up being ethically and epistemologically illiterate.
Paper short abstract:
The informational asymmetry in AI development, controlled by a handful of corporations, promotes folklore. Rather than dismissing AI folklore, it should be explored, as it can reveal how technologies are woven into our lives and attempts to master and be inspired by ongoing cultural transformation.
Paper long abstract:
The goal of my intervention is to introduce the notion of AI folklore, which encompasses the vernacular culture of beliefs, stories, and predictions surrounding the advent of AI. I suggest that everyone, including anthropologists, contributes to AI folklore due to the collective inability to fully grasp current developments. The informational asymmetry in AI development, predominantly controlled by a few corporations, provides a fertile ground for folklore. Rather than dismissing AI folklore, however, I propose making it the focal point of our inquiry, as it provides a lens to examine historically ingrained and situationally emerging structures of thought and feeling, practices, and future anticipations.
Incorporating AI folklore into our research allows us to delve into enthusiastically shared canonical narratives, ambivalent responses, and anticipation in future making. The growing prevalence of learning and responsive systems underscores the relevance of studying AI folklore, especially since folklore informs technical realities. When we encounter compelling stories about how AI will disrupt our lives, or how it operates, these narratives are likely to shape AI-related practices and responses. Therefore, AI folklore extends beyond the personal, domestic, and imaginative realms, influencing technical, political, and economic realities. AI folklore makes it clear that the future is not determined by technologies, but by how these technologies are woven into our endeavors and attempts to master and be inspired by ongoing cultural transformation. Here, we are all at the edge of the future, and it matters what we do and what we think is occurring at this critical moment.
Paper short abstract:
This study examines AI's application in Turkey's leading NGOs, highlighting the challenges in data set expansion and open access amidst Big Tech and government policies. Initial findings suggest NGOs prioritize data expansion over bias, navigating restrictions to responsibly grow data sets.
Paper long abstract:
I explore the application of AI in the context of Turkey's leading NGOs, informed by the material-colonial aspects of labor and data extraction and the epistemic practices underlying AI development. This study aims to illustrate how AI's usage in local settings, particularly among these NGOs, can differ significantly from global AI discussions. A key observation is the NGOs' focus on expanding data sets rather than addressing data bias, necessitating open access from public entities that are often reluctant to provide it. This challenge requires innovative strategies to expand data sets responsibly and accountably.
Additionally, the research highlights the critical issue of data accessibility, emphasizing the ideological implications of keeping data open and free. This raises the question of whether organizations should use their data for competitive growth or maintain it for the common good. A notable gap is identified in the availability of Turkish language AI applications and localized databases despite ongoing efforts in Turkish language processing. This indicates a broader issue in localized data processing capabilities.
The study also addresses the complex relationship between Big Tech, civil society, and public policies, with Big Tech platforms seeking to expand their market while aligning with government AI policies. This may lead to tensions with civil society. Overall, these findings contribute to understanding the unique challenges and dynamics at play in the local application of AI technology.
Paper short abstract:
The purpose of this paper is to explore ways in which anthropology can contribute to the analysis of AI through what we might call the subjectification of Turing machines: the capacity of humans to turn Turing machines into subjects and/or the capacity of Turing machines to be subjects.
Paper long abstract:
Since the inception of modern computer science, a persistent inquiry has revolved around the potential for computers to attain consciousness. Alan Turing introduced the renowned 'imitation game' or Turing test in 1950, explicitly aiming to unveil the hypothetical subjective nature of a computer. However, only in recent years—perhaps even recent months—have the astonishing advancements in AI, particularly in LLMs, thrust the issue of 'consciousness' or 'subjectivity' of computers into the spotlight once again. The crux of the matter posed by these innovative developments is twofold: firstly, can AI machines achieve consciousness? Secondly, how can we discern it? The challenge we face is to explore avenues through which anthropology can contribute to this discourse, which, thus far, has predominantly engaged philosophers, computer scientists, and cognitive scientists. I propose five possible lines of discussion:
1. Roger Penrose’s theory of consciousness, positing that Turing machines inherently lack the capacity for consciousness.
2. Giulio Tononi’s Integrated Information Theory (IIT) of consciousness, which delineates the physical conditions fostering the emergence of qualia in information-processing devices.
3. Anthropological theories of animism/anthropomorphism, which investigate the circumstances under which subjectivity is ascribed to non-human entities.
4. Structuralist theory of mind and meaning, as developed by Lévi-Strauss, elucidating binary oppositions as the genesis of meaning, intentionality, and human subjectivity.