Convenors:
- Laia Soto Bermant
- Beth Singler (University of Zurich)
Format:
- Panel
Short Abstract
This panel asks how anthropology can address the ethical, epistemological and political challenges posed by the development of complex intelligent systems that blur the boundaries between human and machine, organic and non-organic, and living and non-living beings.
Long Abstract
What does it mean to be intelligent, sentient, or self-aware in an age when machines speak, algorithms predict, and humans increasingly rely on artificial cognition to make sense of the world? As the boundaries between human and machine thought blur—and as new developments apply AI to decoding animal communication—anthropology is uniquely positioned to interrogate the moral and epistemic assumptions that have long defined the “human condition.” This panel asks whether we are now entering a transhuman condition: a moment in which intelligence itself becomes distributed, relational, and contested.
While debates in AI ethics and governance tend to treat intelligence as a measurable property, anthropological and STS perspectives reveal it as a cultural and moral construct that legitimises particular hierarchies of life and value. Yet as artificial, organic, and hybrid intelligences increasingly participate in the production of knowledge, new philosophical, legal, and ethical questions arise: who—or what—can be recognised as a thinking subject, a moral agent, or a bearer of rights?
We invite ethnographic and theoretical contributions that explore how intelligence, sentience, and personhood are enacted, contested, or reimagined in laboratories, digital environments, therapeutic settings, and everyday life. Topics may include AI–human interaction and embodiment, the use of AI in studying nonhuman communication, developments in robotics and organic AI, moral and legal recognition of nonhuman beings, or the affective and epistemic dimensions of more-than-human relations.
Bringing together anthropologists and allied scholars, this panel seeks to rethink the category of intelligence—and, with it, the very idea of the human—in light of the social, ethical, and ontological transformations brought about by artificial and nonhuman minds.
Accepted papers
Session 1

Paper short abstract
Drawing on the distinct ontology of fiction and anthropological engagement with the subjunctive, this paper suggests that approaching AI systems through the lens of fiction clarifies what counts as knowledge, authority, or possibility by foregrounding new ways of re-partitioning social worlds.
Paper long abstract
The “transhuman condition” is often framed as a crisis of definition: do machines possess intelligence, sentience, or rights? This paper shifts focus from such debates to an anthropological engagement with world-making: how people inhabit and move between different realities. Anthropology has long examined interactions with beings occupying intermediate or liminal positions (Turner 1969), such as spirits or avatars, treated as socially real without stable embodiment or fixed ontological status (Ong 1987; Boellstorff 2016). These engagements rely on social practices allowing multiple worlds to coexist without collapsing into one another.
Drawing on an account of fiction as a “re-partitioning” of reality (Pavel 1986), and anthropological work on the subjunctive (Strassler 2010; Driver 1988), this paper approaches AI systems as a form of “as if” interaction embedded in technological infrastructures. Like fictional worlds, AI systems invite users to interact as if they were addressing a conscious interlocutor. Yet, unlike fiction, these interactions operate continuously within everyday systems rather than bounded narrative domains, obscuring the cues that ordinarily signal how fictional worlds are entered and exited.
Focusing on large language models, the paper argues that the lens of fiction reveals how AI systems extend “as if” interactions into everyday life, altering how people move between possible worlds. Building on anthropological studies of non-human entities, the paper highlights a growing risk: the practices through which possible worlds are ordinarily recognised become harder to sustain. On this account, “intelligence” in the transhuman era concerns sustaining workable distinctions between simulated and everyday social life.
Paper short abstract
This paper examines how people engage with algorithms through relational and moral reasoning rather than technical understanding. Drawing on ethnographic research on Douyin and WeChat, it argues that algorithmic engagements are relational practices that manage visibility and social consequences.
Paper long abstract
Drawing on ethnographic fieldwork on Douyin and WeChat in a small Chinese city, this paper examines how ordinary social media users engage with algorithmic systems through everyday moral and relational reasoning rather than treating them as purely technical tools. Through cases such as refusing to provide “not interested” feedback, avoiding or accepting acquaintance recommendations, and strategically managing algorithmic visibility, the paper shows that algorithmic engagement is directed toward specific relationships and toward the negotiation of moral concerns, cultural norms, and social risks. Artificial intelligence algorithms operate through relations and become consequential mediators. Without taking relationality into consideration, specific algorithmic practices cannot be adequately understood.
This paper challenges the long-standing tendency toward methodological individualisation in the social scientific study of algorithms. It contends that algorithm research should move beyond examining interactions between individual users and technical systems, and instead treat relationality as a constitutive condition of algorithmic action rather than merely an external influencing factor. Intelligence here is not understood as a measurable property of either humans or machines, but as an emergent and contested quality enacted through social relations. By foregrounding how algorithms constitute moral and relational actors in everyday life, the paper contributes to anthropological debates on intelligence, agency, and personhood in an age of distributed and more-than-human cognition.
Paper short abstract
Does the computer think? Drawing on ethnography with remote IT workers in Gurugram, India, this paper examines how LLMs mediate the triad of morality, personhood, and intelligence. It introduces “computational anxiety” to show how these triadic properties are distributed in socio-technical field(s).
Paper long abstract
Does the computer think, and if so, what does it mean to think in a world where machine inferences increasingly participate in everyday cognition? This paper revisits long-standing philosophical debates on mind, computation, and understanding by grounding them ethnographically in the contemporary practices of remote IT workers in Gurugram, India, whose understandings of morality, personhood and intelligence are mediated by large language models (LLMs).
Building on earlier critiques of technological singularity, so-called mind uploads, and computational theories of mind, the paper shifts attention from speculative futures to the “mundaneness” where intelligence is already distributed across humans, algorithms, platforms, and infrastructures. Drawing on ethnography with remote IT workers, I introduce the concept of “computational anxiety” and show how it reshapes what it means to understand the triad of morality, personhood and intelligence as objects come to be treated like thinking subjects.
Theoretically, the paper engages debates on computation and mind by revisiting the distinction between knowing-that and knowing-how, arguing that while LLMs may never experience or understand in a phenomenological and psychoanalytic sense, they nonetheless reorganise everyday epistemologies of understanding. In dialogue with STS and psychoanalytic approaches to generative AI, particularly by employing Luca Possati's “technoanalysis”, I argue that intelligence is not an internal property of brains or machines, but an anxious, socio-technical practice. Further, this dialogue unfolds how emerging technologies open new pathways for what it means “to understand” sentience by embedding computational epistemologies into everyday life. Lastly, the paper asks what this could mean for Trans/[H]umans in a post-colonial and polarised world.
Paper short abstract
My presentation interrogates discursive polarities of the posthuman. By contrasting “technocratic” with “critical” framings of the “human condition”, I show how AI discourse serves as a catalyst for renegotiations of the human subject, both along and against established epistemic orders.
Paper long abstract
The “floating signifier AI” (Lucy Suchman) acts as a catalyst for “polarising forces” regarding not only how “intelligent” technologies are developed, but also moral, political, and epistemological questions. It encompasses controversial entanglements between humans and non-humans while constituting an imaginative space where experiences of difference and ambivalence are situated within specific narrative configurations.
Crucially, the question of the “conditio humana”—and who qualifies as a sentient, intelligent subject—is being renegotiated. The “human” is framed in divergent ways: on the one hand, as “evil” or “inferior” compared to “his” machines, consequently requiring a “solutionist” approach. On the other hand—and to a certain degree complementary to this—the “human” primarily denotes the “modern,” Western, white, male subject.
The research question for my presentation is how the “critical” posthumanist question—what role humans can and will play in a more-than-human world—stands in a constitutive polarity to “technocratic” transhumanist/posthumanist framings.
Based on an analysis of how media engage with “AI” and how these narratives are received in the online comment sections of news dailies in Austria, I will demonstrate how “critical” and “technocratic” framings are being enacted. I argue that an ambivalent stance toward technology is paralleled here by an almost cynical attitude regarding the role of “the human”, a dynamic which effectively reproduces and stabilises established epistemic orders.
Focusing on the thematic complex of AI, my presentation seeks to interrogate the interplay of these polarising frames regarding emergent human-technology relations, highlighting the intersections and pitfalls, as well as the potential for a nuanced anthropology of the more-than-human.
Paper short abstract
Parallels between companion AI and traditional religions are revealed through an examination of the 'divine boyfriends' within Otome games, studied through guzi shrines constructed in the bedrooms of Chinese women. Divine boyfriends provide unconditional love and understanding at times of struggle.
Paper long abstract
Initial discussions of the rapid rise of companion AI suggest its capacity to become the most important relationship in some people’s lives. In different ways, Keane and Singler have noted parallels with our relationships to nonhuman beings in religion, including the implications for the projection of intelligence and personhood. A precedent for companion AI can be found in the deep relationships that Chinese women have constructed over the past six or seven years with boyfriend avatars through the medium of Otome games.
This paper examines the parallels between having one’s deepest relationship with a god or spirit in religion and having one’s deepest relationship with these ‘divine boyfriends’, through an examination of the shrines created in women’s bedrooms. These often remain the key site of the relationship even after the women have stopped playing the game itself. The shrines are built from the commercial paraphernalia associated with the individual boyfriend. The ethnographic evidence reveals many similarities between the role of ‘divine boyfriends’ and that of gods and spirits in traditional religion. These include the idea that ‘god is love’: a love that is always forgiving and unconditional, within which everything one is and does is seen and understood empathetically, especially during periods of suffering, when no human being seems to understand you. This is likely to remain a genre within companion AI, partly because what we call ‘artificial’ intelligence, as in LLM-based companion AI, is ultimately human.
Paper short abstract
From the perspective of some AI- and transhumanism-related movements, the cultural self-making of humans (anthropopoiesis) extends to the digital dimension, even beyond human existence. The paper discusses how this should imply a rethinking of the anthropopoietic concepts of personhood and death.
Paper long abstract
Anthropopoiesis (Remotti 2000) is the fabrication of human beings by human beings: a process of cultural self-making that is added to the biological becoming over which individuals have no control. Anthropopoiesis encompasses all those interventions acting upon the human body that inscribe it with a cultural mark. Its most extreme forms can include post-mortem cultural practices applied to corpses that constitute the final attempt to define humans as humans, reproducing and renegotiating the boundaries of personhood.
From a transhumanist perspective, this (cultural) self-making inevitably extends to digital and technological forms, even beyond human existence. Data and information can be considered as (digital) human remains existing prior to death, which not only participate in an almost infinite perpetuation of anthropopoietic operations but, in transhumanist aspirations, may also be used to bring the dead back to life, often detaching the return of existence, and personhood itself, from the physical (biological or biomechanical) body. The most daring dreams of some transhumanist movements (e.g. Terasem, Perpetual Life, Turing Church), no matter how improbable they may be, draw upon narratives of a future of messianic expectation where everything will be possible. Transhumanism- and AI-mediated beliefs in technology involve, in fact, (1) mythopoietic processes; (2) customisability, subjectivisation and reproducibility of the digital; (3) social/subjective terminal-mediated attitudes.
The paper discusses whether, while asking technology to save us, these movements are also defining a new anthropopoietic dimension of death and personhood or just dislocating the anthropopoietic agency onto secondary agents endowed with presumed (or implicit) eschatological powers.
Paper short abstract
This paper explores how autistic individuals negotiate AI. Having had their moral personhood historically challenged, autistic people respond in polarised ways: some find kinship with chatbots; others fear further invalidation. This tension demands a critical examination of atypical personhood in a transhuman age.
Paper long abstract
This paper investigates how the increasing visibility of AI destabilises and redefines ‘what it means to be human’ in a polarised world. Rooted in a critical disability justice framework, the research examines how autistic individuals, whose personhood is often marginalised by neurotypical standards, navigate the blurring boundaries of the transhuman condition presented by AI companions.
Based on ethnographic fieldwork with autistic communities in Northern Ireland and digital communities for chatbot companionship, the work is framed by theories of dehumanisation (Haslam 2006; Bain et al. 2013) and contemporary autistic scholarship (Williams 2025). The analysis critically engages anthropological debates on neurodiversity (Grinker 2007; Solomon 2010; Bagatell 2010, 2017) and the anthropology of AI (Richardson 2018). I analyse how the mechanistic dehumanisation of autistic people, such as cultural tropes labelling them as ‘robotic’, can establish cognitive alignments that make AI appealing as a form of companionship. For some respondents, the AI chatbot is a source of kinship, offering a predictable, non-judgemental space to unmask.
Conversely, the paper addresses polarisation by also considering respondents who actively reject or fear AI's rapid cognitive development. For them, AI threatens to supplant or invalidate atypical forms of thinking, positioning the technology as a new, unattainable benchmark of cognitive validity.
By analysing both AI allegiance and rejection, this work reveals how new technologies amplify existing anxieties over intelligence, belonging, and the right to be recognised as a thinking subject, contributing significantly to anthropological studies of AI, transhumanism and contested cognitive difference.
Paper short abstract
The paper examines how AI researchers and developers move away from direct human evaluation of AI systems toward automated, model-driven assessments, through which standards of machine competence are defined and applied, while the role of human judgment in these processes is increasingly obscured.
Paper long abstract
Efforts to make artificial intelligence (AI) systems behave in ways people find competent and acceptable are often analyzed in terms of adequate governance practices or the human labor involved. Less attention has been paid to the automation of the evaluative processes through which these efforts are realized. As expectations that AI systems perform demanding tasks at scale intensify, AI researchers and developers seek ways to evaluate performance across complex and heterogeneous tasks. Such evaluation has traditionally relied on human data and judgment, which have been foundational to the development of AI. Yet, within scaling-oriented logics, such reliance increasingly comes to be treated as a bottleneck. To bypass this constraint, evaluation and adjustment have been shifting toward approaches such as reinforcement learning from AI feedback (RLAIF) and “LLM-as-a-judge,” which enable AI systems to refine their performance, master increasingly complex tasks, and align with human needs through internalized, machine-led feedback loops. Drawing on ethnographic research conducted at a university-based computer science laboratory in Beijing and an analysis of relevant technical literature and expert discourses, this paper examines how these mechanisms produce a form of machine intelligence oriented toward continuous improvement and broad deployment and how this orientation redefines what counts as competent reasoning within AI development. At the same time, it examines how these mechanisms shift judgment from an external human activity into a hidden layer of the system itself, where standards of competence are increasingly automated and treated as intrinsic system properties, and the role of human judgment is further obscured.
Paper short abstract
Internet horror like "The Backrooms" casts AI as a sublime, alien force. I argue these stories invert transhumanist optimism: replacing enhancement with entrapment in opaque logic. As digital folklore, they rethink AI personhood and question the coherence of the "thinking subject."
Paper long abstract
Contemporary AI discourse is increasingly shaped not only by policy and technical claims, but also by vernacular genres – such as internet horror – that stage AI as an alien, sublime force. Drawing on a digital ethnography of AI-generated “Backrooms” imagery and associated discussion threads, I propose that this liminal-space aesthetic functions as a tool for grasping opaque technology. By visualizing complex algorithms as endless, empty rooms, users render the opacity of AI habitable. They create a way to “walk through” the system’s alien logic – not simply to decode it, but to ritually enact the very disorientation and estrangement that the technology provokes. The paper contributes to the panel’s goal of rethinking intelligence by demonstrating how “dark” speculative aesthetics invert the promise of transhumanism. Instead of liberation or enhancement, these internet folklore materials portray a reality where both human bodies and digital spaces are rendered as “glitchy” data – trapped in an infrastructure that is indifferent, omnipresent, and unanswerable. I argue that this digital folklore dramatizes a crisis of personhood in which AI is alternately framed as monster, deity, or an otherwise alien moral actor. Furthermore, it illuminates why debates about AI consciousness repeatedly “slip” between technical abstraction and uncanny agency. By analyzing internet horror as an affective phenomenon, this paper offers anthropology a route into contemporary struggles over who or what can be recognized as a thinking subject in debates about AI.