- Convenors:
- Anna (chosen name Zora) Ritz (University of Bern)
- Alexandra Deem (Ca' Foscari University)
- Formats:
- Panel
- Networks:
- Network Panel
Short Abstract
Visual culture is increasingly machine-centric: images interpreted, classified, and created by machines. This panel examines how computer vision and AI image generation create, reproduce, and reinforce socio-political polarizations, urging Visual Anthropology to critically engage with them.
Long Abstract
Over the past decade, visual cultures, and especially human-machine-image entanglements, have undergone a fundamental transformation: they are increasingly shaped by algorithms and neural networks. Digital images are no longer just seen; they are indexed and operationalized by machines through rigid classifications that often reflect and reinforce existing power structures. At the same time, synthetic AI-generated images circulate with unprecedented speed and scale, amplifying misinformation and reconfiguring visual authority in political and social life.
This panel explores how machine vision systems—encompassing both computer vision (recognition, classification) and generative AI (synthesis, creation, imagination)—constitute an integrated regime that reshapes visual culture and intensifies polarization. Machine vision creates technical polarizations: computer vision fragments images into features, edges, and keypoints—decomposing unified visual fields into computational data structures. Further, it amplifies socio-political polarizations, embedding biases and asymmetries of visibility within algorithmic infrastructures.
This panel aims to reflect on the digital, material and socio-political dimensions of AI vision systems: How have human-machine-image entanglements changed? How can anthropology propose critical perspectives on computer vision and image-generation tools beyond techno-positivism and techno-determinism? How might researchers reject, subvert, or strategically deploy such tools in fieldwork and analysis? And how can interdisciplinary collaboration between Visual Anthropology, Media Studies, and Science and Technology Studies expand these debates?
By convening scholars and practitioners across these fields, this panel asks how visual anthropology can critically engage with these transformations, rethinking documentation, representation, and the politics of seeing and creating images in an era when the analysis and production of visuals are becoming post-human.
Accepted papers
Session 1
Paper short abstract
Drawing on a historical overview of computer vision’s role in contemporary AI, the research traces the material conditions of its production, from platform-based gig work to BPO companies in the global South, often operating under the narrative of impact sourcing.
Paper long abstract
Computer vision is the field that has primarily driven the contemporary expansion of subsymbolic AI. One specific event is commonly cited as the beginning of this new phase: the success of AlexNet in the 2012 ImageNet competition. While AlexNet’s success was certainly grounded in the innovative architecture of the neural network, it was also the outcome of a new form of labor organization: the large-scale image annotation carried out by Amazon Mechanical Turk microworkers, one of the first online platforms to develop a strong focus on AI training.
Since then, data annotation for AI training has expanded significantly, generating not only a multitude of different platforms but also redirecting a substantial portion of the BPO industry toward the training and fine-tuning of neural networks. This production regime is often framed as impact sourcing, i.e., the practice of recruiting highly marginalized social groups with low levels of education and limited access to formal employment. Behind the narrative that portrays this production model as a vehicle for emancipation and empowerment, often lies a business strategy that exploits the limited bargaining power of these social groups and their abundant labor supply.
Building on a historical and socio-technical analysis of AI, the proposal examines the case of companies in India that recruit marginalized groups along gendered, cultural, and religious lines of division to train computer vision models. Drawing primarily on semi-structured interviews with workers, the analysis addresses the concrete material conditions under which contemporary computer vision models are produced.
Paper short abstract
The inauguration of the Ram Temple in Ayodhya on 22 Jan 2024 sparked a surge of AI-generated images on X. These affective, fictional visuals merged Hindu religiosity with digital platforms. This presentation examines their generation, circulation and resharing as "cultural imaginaries".
Paper long abstract
The inauguration of the Ram Temple in Ayodhya on 22 January 2024 led to a large-scale generation of AI-produced 'devotional' imagery on Twitter (now X). These generated images were affectively charged, fictionalised visual worlds that merged Hindu religiosity with the algorithmic infrastructures of digital platforms. Through the lens of Roland Meyer's "platform realism" (2025), this presentation investigates the practice of generating, sharing and resharing these images as "cultural imaginaries". I argue that generative AI models produce mythic depictions that are optimised for virality, and that such practices demonstrate how far-right religious-political movements aestheticise power through the infrastructures of generative technology.
These images crystallise into three affectively charged imaginaries that transform devotion into ideological performance: Hegemonic Masculinist Imaginaries, Restorative Imaginaries, and Techno-utopian Imaginaries. Through ethnographic research on the circulation of these images on Twitter (now X), I analyse these imaginaries using critical discourse analysis and affect theory, situating AI-generated Hindu imagery within broader debates on affect and Hindu Nationalist digital culture.
Paper short abstract
How can AI-generated images render visible the agencies of corals that lie beyond human visual thresholds? This presentation takes an experimental, posthuman approach to visual cultures of the coral holobiont, exploring what the machinic gaze might reveal or obscure.
Paper long abstract
This presentation challenges dominant representations of coral reefs by turning attention to the coral holobiont—the complex symbiotic assemblage of coral, algae, bacteria, and countless other organisms whose collaborative labor remains largely invisible to the human eye.
Working with AI image generation tools, this project explores what the machinic gaze might reveal or obscure. What regimes of visibility emerge when generative systems synthesize coral imagery from historically burdened archives? And conversely: what forms of knowledge, agency, and multi-species kinship might be embodied in images that refuse familiar aesthetics of reef photography?
This presentation engages with alternative visual languages that foreground the collective agency of corals as multispecies-technological assemblages. Rather than aiming at accurate representation, this work experiments with speculative visualization—an attempt to render visible the invisible collaborations within the holobiont and to ask how such images might attune us to distributed forms of cognition and cooperation within the reef. In doing so, this posthuman endeavor positions human-AI collaboration as a site for rehearsing other ways of seeing multispecies worlds.
Paper short abstract
In my paper, I claim that with the advent of AI-generated visual content, another principle could be added to Foucault’s “heterotopology”, vis-à-vis virtual heterotopias: (ϗ) A virtual heterotopia has at least one nucleus that functions as the generator of its (a) existence and (b) Otherness.
Paper long abstract
Virtual heterotopias have a series of particularities that individualize them and at the same time place them beyond the perspectives exposed by Michel Foucault, Kit Hetherington, James D. Faubion, Henri Lefebvre, Michiel Dehaene and Lieven De Cauter. They are juxtapositions of multiple meanings. With the advent of AI-generated images, the contextual construction of space and time in virtual worlds has changed considerably. In video games, players co-create the meanings encapsulated by the game developers and synthesized with AI. Most virtual heterotopias are currently found in video games. A significant proportion of these are "role-playing games", "adventure games", "action games" or combinations thereof. Drawing upon anthropological research, I claim that with the advent of AI-generated visual content, another principle could be added to Foucault's "heterotopology" when attempting to describe virtual heterotopias: (ϗ) A virtual heterotopia has at least one nucleus that functions as the generator of its (a) existence and (b) Otherness. In my paper, I begin by assessing several theoretical approaches to heterotopias. I continue by examining the political content of games that include virtual heterotopias. I delve into their relevance as examples of political messages launched in recent years (e.g. Cyberpunk 2077, Kingdom Come: Deliverance II etc.). Afterwards, I describe the principle (ϗ) using examples from my research. I argue that this principle may also apply to real-world heterotopias, since they tend to have a political dimension. My excursus is based on participant observation, interviews and other data collection techniques.
Paper short abstract
As generations Z and Alpha in Britain have increasingly engaged with AI in their daily lives, there is much that we older generations can learn from them. This paper draws from recent research with young Brits to propose a more nuanced understanding of the potential of AI tools and images in society.
Paper long abstract
As AI-generated images have entered our everyday production and circulation of images, there has been a tendency in Britain – both through the media and academic studies – either to demonise these emerging technologies or embrace them as inevitable.
Responding to the fields of digital and visual anthropology in its theoretical framework, this paper draws from the findings of two recent studies I have conducted to propose a more complex picture of the way that younger generations are making sense of themselves and the world in relation to their engagement with AI technologies and images.
Firstly, I will present the results from a cohort of young people in the Yorkshire city of Bradford, showing how both their production and viewership of GenAI images reveal what an everyday relationship with the technology can look like. These young people approach GenAI with equal measures of curiosity and scepticism. As with much of digital ethnography, we see how this group makes its own meanings from the range of digital technologies in its members' lives, with little hype or fear.
Then, we see how an artist and a team of producers are using digital art to entertain and educate young people about the potential applications of AI in surveillance technologies. Through a nationally touring art project, we see how young audiences interact with and reject an overly dystopian depiction of AI. This has led the artist to consider how AI can instead be used as a tool 'to make us more human', enabling deeper forms of connection.