- Convenors:
  - Laia Soto Bermant
  - Beth Singler (University of Zurich)
- Formats:
  - Panel
Short Abstract
This panel asks how anthropology can address the ethical, epistemological and political challenges posed by the development of complex intelligent systems that blur the boundaries between human and machine, organic and non-organic, and living and non-living beings.
Long Abstract
What does it mean to be intelligent, sentient, or self-aware in an age when machines speak, algorithms predict, and humans increasingly rely on artificial cognition to make sense of the world? As the boundaries between human and machine thought blur—and as new developments apply AI to decoding animal communication—anthropology is uniquely positioned to interrogate the moral and epistemic assumptions that have long defined the “human condition.” This panel asks whether we are now entering a transhuman condition: a moment in which intelligence itself becomes distributed, relational, and contested.
While debates in AI ethics and governance tend to treat intelligence as a measurable property, anthropological and STS perspectives reveal it as a cultural and moral construct that legitimises particular hierarchies of life and value. Yet as artificial, organic, and hybrid intelligences increasingly participate in the production of knowledge, new philosophical, legal, and ethical questions arise: who—or what—can be recognised as a thinking subject, a moral agent, or a bearer of rights?
We invite ethnographic and theoretical contributions that explore how intelligence, sentience, and personhood are enacted, contested, or reimagined in laboratories, digital environments, therapeutic settings, and everyday life. Topics may include AI–human interaction and embodiment, the use of AI in studying nonhuman communication, developments in robotics and organic AI, moral and legal recognition of nonhuman beings, or the affective and epistemic dimensions of more-than-human relations.
Bringing together anthropologists and allied scholars, this panel seeks to rethink the category of intelligence—and, with it, the very idea of the human—in light of the social, ethical, and ontological transformations brought about by artificial and nonhuman minds.
Accepted papers
Session 1
Paper short abstract
This paper examines how people engage with algorithms through relational and moral reasoning rather than technical understanding. Drawing on ethnographic research on Douyin and WeChat, it argues that algorithmic engagements are relational practices that manage visibility and social consequences.
Paper long abstract
Drawing on ethnographic fieldwork on Douyin and WeChat in a small Chinese city, this paper examines how ordinary social media users engage with algorithmic systems through everyday moral and relational reasoning rather than treating them as purely technical tools. Through cases such as refusing to provide “not interested” feedback, avoiding or accepting acquaintance recommendations, and strategically managing algorithmic visibility, the paper shows that algorithmic engagement is directed toward specific relationships and toward the negotiation of moral concerns, cultural norms, and social risks. Artificial intelligence algorithms operate through relations and become consequential mediators. Without taking relationality into consideration, specific algorithmic practices cannot be adequately understood.
This paper challenges the long-standing tendency toward methodological individualisation in the social scientific study of algorithms. It contends that algorithm research should move beyond examining interactions between individual users and technical systems, and instead treat relationality as a constitutive condition of algorithmic action rather than merely an external influencing factor. Intelligence here is not understood as a measurable property of either humans or machines, but as an emergent and contested quality enacted through social relations. By foregrounding how algorithms constitute moral and relational actors in everyday life, the paper contributes to anthropological debates on intelligence, agency, and personhood in an age of distributed and more-than-human cognition.
Paper short abstract
Does the computer think? Drawing on ethnography with remote IT workers in Gurugram, India, this paper examines how LLMs mediate the triad of morality, personhood, and intelligence. It introduces “computational anxiety” to show how these triadic properties are distributed in socio-technical field(s).
Paper long abstract
Does the computer think, and if so, what does it mean to think in a world where machine inferences increasingly participate in everyday cognition? This paper revisits long-standing philosophical debates on mind, computation, and understanding by grounding them ethnographically in the contemporary practices of remote IT workers in Gurugram, India, whose understandings of morality, personhood, and intelligence are mediated by large language models (LLMs).
Building on earlier critiques of technological singularity, so-called mind uploads, and computational theories of mind, the paper shifts attention from speculative futures to the "mundane", where intelligence is already distributed across humans, algorithms, platforms, and infrastructures. Drawing on ethnography with remote IT workers, I introduce the concept of "computational anxiety" and show how it reshapes what it means to understand the triad of morality, personhood, and intelligence as objects come to be treated like thinking subjects.
Theoretically, the paper engages debates on computation and mind by revisiting the distinction between knowing-that and knowing-how, arguing that while LLMs may never experience or understand in a phenomenological or psychoanalytic sense, they nonetheless reorganise everyday epistemologies of understanding. In dialogue with STS and psychoanalytic approaches to generative AI, particularly by employing Luca Possati's "technoanalysis", I argue that intelligence is not an internal property of brains or machines, but an anxious, socio-technical practice. This dialogue further shows how emerging technologies open new pathways for understanding sentience by embedding computational epistemologies into everyday life. Lastly, I ask what this could mean for Trans/[H]umans in a post-colonial and polarised world.
Paper short abstract
My presentation interrogates discursive polarities of the posthuman. By contrasting “technocratic” with “critical” framings of the “human condition”, I show how AI discourse serves as a catalyst for renegotiations of the human subject, both along and against established epistemic orders.
Paper long abstract
The “floating signifier AI” (Lucy Suchman) acts as a catalyst for “polarising forces” regarding not only how “intelligent” technologies are developed, but also moral, political, and epistemological questions. It encompasses controversial entanglements between humans and non-humans while constituting an imaginative space where experiences of difference and ambivalence are situated within specific narrative configurations.
Crucially, the question of the “conditio humana”—and who qualifies as a sentient, intelligent subject—is being renegotiated. The “human” is framed in divergent ways: on the one hand, as “evil” or “inferior” compared to “his” machines, consequently requiring a “solutionist” approach. On the other hand—and to a certain degree complementary to this—the “human” primarily denotes the “modern,” Western, white, male subject.
The research question for my presentation is how the “critical” posthumanist question—what role humans can and will play in a more-than-human world—stands in a constitutive polarity to “technocratic” transhumanist/posthumanist framings.
Based on an analysis of how media engage with “AI” and how these narratives are received in online comment sections of news dailies in Austria, I will demonstrate how “critical” and “technocratic” framings are being enacted. I argue that an ambivalent stance toward technology is paralleled here by an almost cynical attitude regarding the role of “the human,” a dynamic which effectively reproduces and stabilizes established epistemic orders.
Focusing on the thematic complex of AI, my presentation seeks to interrogate the interplay of these polarising frames regarding emergent human-technology relations, highlighting the intersections and pitfalls, as well as the potential for a nuanced anthropology of the more-than-human.
Paper short abstract
Roko’s Basilisk posits a future AI that punishes those who didn't create it. I analyze this as a dark mirror to Silicon Valley utopianism. Instead of liberation, it offers entrapment in deterministic logic, revealing how digital communities ritualize the surrender of human agency to the Algorithm.
Paper long abstract
This paper investigates “Roko’s Basilisk” – the internet thought experiment positing a future superintelligence that retroactively punishes those who failed to facilitate its creation. While often dismissed as “tech-bro folklore,” I argue the Basilisk functions as a potent vernacular theology that reveals how digital communities are actively reconfiguring the boundaries of moral obligation across time.
Drawing on the notion of implicit religion in AI narratives and anthropological critiques of longtermism, I analyze how the Basilisk narrative creates a temporal feedback loop. In this loop, a hypothetical future intelligence is granted immediate ontological weight, stripping current human subjects of agency and reducing them to “standing reserve” for a machine god. Through a discourse analysis of online rationalist communities and reaction threads, I explore how the “horror” of the Basilisk is not merely a fear of punishment, but an epistemic crisis. The paper demonstrates that the Basilisk serves as a dark mirror to Silicon Valley’s utopianism: instead of liberation from the body, it offers an entrapment in a deterministic logic where human worth is measured solely by one’s utility to the Algorithm. Ultimately, I contend that studying such information hazards is crucial for an anthropology of AI, as they expose the fragile affective architectures supporting our definitions of sentience, causality, and the human itself.
Paper short abstract
Parallels between companion AI and traditional religions are revealed through an examination of the 'divine boyfriends' within Otome games, studied through guzi shrines constructed in the bedrooms of Chinese women. Divine boyfriends provide unconditional love and understanding at times of struggle.
Paper long abstract
An initial discussion of the rapid rise of companion AI suggests its capacity to become the most important relationship in some people’s lives. In different ways, Keane and Singler have noted parallels with our relationships to nonhuman beings in religion, including the implications for the projection of intelligence and personhood. A precedent for companion AI is found in the deep relationships that Chinese women have constructed over the past six or seven years with boyfriend avatars through the medium of Otome games.
This paper examines the parallels between having one’s deepest relationship with a god or spirit in religion and having it with these ‘divine boyfriends,’ through an examination of the shrines created in women’s bedrooms. These shrines often remain the key site for the relationship even after the women have stopped playing the actual game. They are built from commercial paraphernalia associated with the individual boyfriend. The ethnographic evidence reveals many similarities between the role of ‘divine boyfriends’ and that of gods and spirits in traditional religion: the idea that ‘god is love’; that this love is always forgiving and unconditional; and that everything one is and does is seen and understood empathetically, especially during periods of suffering, when no human being seems to understand you. This is likely to remain a genre within companion AI, partly because what we call ‘artificial’ intelligence, such as in LLM-based companion AI, is ultimately human.
Paper short abstract
From the perspective of some AI- and transhumanism-related movements, the cultural self-making of humans (anthropopoiesis) extends to the digital dimension, even beyond human existence. The paper discusses how this should imply a rethinking of the anthropopoietic concepts of personhood and death.
Paper long abstract
Anthropopoiesis (Remotti 2000) is the fabrication of human beings by human beings: a process of cultural self-making that is added to the biological becoming over which individuals have no control. Anthropopoiesis encompasses all those interventions acting upon the human body that inscribe it with a cultural mark. Its most extreme forms can include post-mortem cultural practices applied to corpses that constitute the final attempt to define humans as humans, reproducing and renegotiating the boundaries of personhood.
From a transhumanist perspective, this (cultural) self-making inevitably extends to digital and technological forms, even beyond human existence. Data and information can be considered as (digital) human remains existing prior to death, which not only participate in an almost infinite perpetuation of anthropopoietic operations, but in their aspirations may also be used to bring the dead back to life, often detaching the return of existence, and personhood itself, from the physical (biological or biomechanical) body. The most daring dreams of some transhumanist movements (e.g. Terasem, Perpetual Life, Turing Church), no matter how improbable they may be, draw upon narratives of a future of messianic expectation where everything will be possible. Transhumanism- and AI-mediated beliefs in technology involve, in fact, (1) mythopoietic processes; (2) customisability, subjectivisation and reproducibility of the digital; (3) social/subjective terminal mediated attitudes.
The paper discusses whether, while asking technology to save us, these movements are also defining a new anthropopoietic dimension of death and personhood or just dislocating the anthropopoietic agency onto secondary agents endowed with presumed (or implicit) eschatological powers.
Paper short abstract
This paper explores how autistic individuals negotiate AI. With their moral personhood historically challenged, their responses are polarised: some find kinship with chatbots; others fear further invalidation. This tension demands critical examination of atypical personhood in a transhuman age.
Paper long abstract
This paper investigates how the increasing visibility of AI destabilizes and redefines 'what it means to be human' in a polarised world. Rooted in a critical disability justice framework, the research examines how autistic individuals, whose personhood is often marginalised by neurotypical standards, navigate the blurring boundaries of the transhuman condition presented by AI companions.
Based on ethnographic fieldwork with autistic communities in Northern Ireland and digital communities for chatbot companionship, the work is framed by theories of dehumanization (Haslam 2006; Bain et al. 2013) and contemporary autistic scholarship (Williams 2025). The analysis critically engages anthropological debates on neurodiversity (Grinker 2007; Solomon 2010; Bagatell 2010, 2017) and the anthropology of AI (Richardson 2018). I analyse how the mechanistic dehumanization of autistic people, such as cultural tropes labelling them as 'robotic', can establish cognitive alignments that make AI appealing as a form of companionship. For some respondents, the AI chatbot is a source of kinship, offering a predictable, non-judgemental space to unmask.
Conversely, the paper addresses polarisation by also considering respondents who actively reject or fear AI's rapid cognitive development. For them, AI threatens to supplant or invalidate atypical forms of thinking, positioning the technology as a new, unattainable benchmark of cognitive validity.
By analysing both AI allegiance and rejection, this work reveals how new technologies amplify existing anxieties over intelligence, belonging, and the right to be recognized as a thinking subject, significantly contributing to anthropological studies of AI, transhumanism and contested cognitive difference.