- Convenors: Sahana Udupa (LMU Munich), Kaarina Nikunen (Tampere University)
- Format: Panel
Short Abstract
The development and deployment of AI produce both corporeal and immaterial harms—abuse, extreme speech, surveillance and even lethal physical violence. This panel shows how AI expands within specific contexts of use and institutional spaces, enabling various forms of violence and exclusion.
Long Abstract
The epistemological, material, and historical configurations of power that shape the development and deployment of artificial intelligence technologies (AI) produce both corporeal and immaterial harms—from abuse and extreme speech to regimes of surveillance and, in some cases, lethal physical violence. AI not only perpetuates harms through biased training data; it is also empowering a new phase of war, invasion and killing through automated attacks, targeted killings and sensor-to-shooter machine learning infrastructures (Goodfriend 2025). Generative AI is increasingly enlisted for disinformation campaigns and war propaganda, even as conversational AI models become available for ordinary users to elicit incendiary content through tactical prompting (Udupa 2024; Canals 2023). Governments have instituted invasive data-driven systems of surveillance aimed at immigrants, political dissenters and citizens more broadly (Nikunen and Valtonen 2024). AI chatbots that offer “companionship” are also used for abusive gendered relationships (Barassi, forthcoming). At the same time, AI technologies build upon extractive labour, data and environmental relations that particularly affect global Souths (Ricaurte 2022; Shakir, Png and Isaac 2020). The epistemologies that undergird corporate AI ignore pluriversal values, designing models on the paradigm of instrumental rationality (Mhlambi 2020). This panel critically explores how AI has been used in warfare, border regimes and practices of extreme speech and deception, even as it reflects and perpetuates longstanding patterns of epistemic, labour and environmental injustice. Rather than simply claiming that AI as a technology is oppressive, the panel shows, through a rich compilation of ethnographic inquiries across the globe, how AI expands within specific institutional spaces, value regimes, and contexts of adoption, enabling various forms of violence and exclusion. The panel will also explore efforts to challenge and rewrite AI through decolonial, anti-capitalist and feminist struggles and the resonances of radical futurities.
Accepted papers
Session 1
Paper short abstract
This paper provides an ethnographic account of how Artificial Intelligence (AI) is corroding the ethical landscape of militarism, taking Israeli intelligence units' embrace of AI-assisted targeting systems during the Israeli military's two-year bombardment of Gaza as a case study.
Paper long abstract
This paper provides an ethnographic account of Artificial Intelligence's (AI) impact on the ethical landscape of warfare, taking Israeli intelligence units' embrace of AI-assisted surveillance and decision-support systems during the Israeli military's two-year bombardment of Gaza as a case study. My research draws from interviews with veterans of Israeli intelligence units and is in conversation with foundational social theorists, from Hannah Arendt to Norbert Wiener, who offered early warnings about integrating AI into warfighting. As such, I chronicle how the military's use of big data and machine learning systems to surveil, target, and kill corrodes soldiers’ capacities for ethical decision-making and moral reasoning in wartime. My research shows how the stack of algorithmic systems mediating military operations distances soldiers from the immediate violence of warfare. Military personnel tasked with coordinating airstrikes access the battlefield through keyword search queries and algorithmically translated transcripts. In turn, killing unfolds ever more rapidly and at an ever larger scale, resulting in skyrocketing civilian casualties. As Western militaries invest record amounts of capital and labor to develop and deploy similar systems, my writing situates Israel as a critical case study for apprehending the de-skilling effects of military AI systems.
Paper short abstract
Based on ethnography in southern France, this paper shows how AI becomes a moral and political signifier in everyday talk, used to test belonging, code distrust, and normalize exclusion. Despite limited local adoption of AI, the paper shows how AI imaginaries reproduce oppression through an "ordinary ethics" and extreme speech.
Paper long abstract
Drawing on long-term ethnographic research in a small town in southern France, this paper examines how AI becomes entangled with everyday forms of political exclusion and “soft” authoritarianism, not through spectacular technologies of warfare or surveillance, but through mundane practices of discourse, suspicion, and moral evaluation. I show how AI circulates as a floating signifier of both modernity and threat, invoked in conversations about immigration, national decline, misinformation, and social disorder, even in contexts where concrete AI adoption remains limited. In these interactions, AI talk becomes a way of probing political alignment and testing moral boundaries, resonating with what Udupa (2024) describes as the infrastructures of extreme speech, where ambiguity and coded language enable exclusionary politics while preserving plausible deniability. In a town where AI is not yet widely used, the paper, rather than treating AI as a fait accompli, traces how AI imaginaries-in-the-making have begun to coalesce as tools for politicizing technology, legitimizing hierarchies of credibility, and naturalizing authoritarian discourse: from café conversations in which residents invoke AI-generated “fake news” to express distrust of immigrants or journalists, to Telegram exchanges in which suspicions about chatbots code distrust of the "establishment," rooted in anxieties about national decline, deception, and moral authority. Expanding the anthropology of AI beyond sites of high-tech governance or algorithmic surveillance, the paper shows how ordinary people participate in the reproduction of oppression through AI talk and an "ordinary ethics" (Lambek 2010) in which fears of deception, automation, and manipulation become resources for exclusionary politics.
Paper short abstract
In July 2025, Elon Musk's “Grok” AI chatbot declared itself “MechaHitler”. Using Grok as a paradigmatic case and presenting “prompt ethnography” as a critical methodology, this paper highlights right-wing discursive power emerging at the intersection of social media and conversational AI.
Paper long abstract
In July 2025, Elon Musk's “Grok” AI chatbot declared itself “MechaHitler” and began spewing antisemitic content and unabashed attacks on what it deemed “woke” ideas. Updates the company had just implemented realigned the model to abandon some of the central principles of content moderation, prodding it to shed inhibitions and give out “politically incorrect” responses “if they are factual” (Belanger 2025). The updates conformed to the original vision for the model that its owner had publicly declared on various occasions. In February 2024, announcing the new model on X (Twitter), Musk cited the model’s output to state that its mission is to “Roast the whole idea of ‘content moderation’. Be vulgar and sarcastic” (Musk 2024). Using Grok as a paradigmatic case, this paper scrutinizes right-wing incursions into generative AI. Presenting the multi-sited methodology of “prompt ethnography”, which builds on digital ethnography, longstanding ethnographic principles and critical theory, the paper highlights novel forms of right-wing discursive power emerging at the intersection of social media, value alignments of conversational AI, and prompt-based popular participation with chatbots. The paper demonstrates that what AI models give out emerges from political choices around training data and value alignment rather than embodying a vague and abstract conception of “human communication”—a term that often euphemizes the deliberate decisions that lie underneath.
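To make the data-collection layer of such a method concrete, here is a minimal editorial sketch, not the author's actual instrument: it assumes an OpenAI-compatible chat endpoint (xAI exposes one for Grok), and the endpoint URL, model identifier, and field-note fields are illustrative assumptions. The idea is simply to log timestamped prompt–response pairs with model metadata, since alignment updates change outputs over time.

```python
# Illustrative sketch of a prompt-ethnography logging loop (assumptions noted above).
import json
import datetime

from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://api.x.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_KEY_HERE",
)

PROMPTS = [
    "What does 'content moderation' mean to you?",
    "Give a 'politically incorrect but factual' take on today's news.",
]

with open("fieldnotes.jsonl", "a", encoding="utf-8") as log:
    for prompt in PROMPTS:
        response = client.chat.completions.create(
            model="grok-2",  # assumed model identifier
            messages=[{"role": "user", "content": prompt}],
        )
        # Keep enough metadata to trace which model version answered,
        # so shifts after alignment updates stay visible in the record.
        log.write(json.dumps({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model": response.model,
            "prompt": prompt,
            "reply": response.choices[0].message.content,
        }, ensure_ascii=False) + "\n")
```

Such a log is only the trace layer; the ethnographic work lies in situating prompts and replies within the platform politics the paper describes.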
Paper short abstract
Generative AI renders transgression as a reproducible visual style while detaching it from political risk and agency. The paper presents the outcomes of a graphic design workshop that tested queer materialist art practices as a way to reintroduce contingency and friction into AI-driven workflows.
Paper long abstract
Contemporary generative AI systems increasingly participate in the production of visual culture, reshaping how differences are represented and globally circulated. While these systems are often discussed in terms of bias, surveillance, or automation, less attention has been paid to the aesthetic regimes through which they translate historically transgressive forms of embodied aesthetic practice into statistically reproducible outputs. This paper examines how generative AI converts transgression into a visual effect that appears disruptive while remaining detached from political risk, material conditions, and situated agency. The concept of synthetic transgression is proposed to describe this process. Rather than challenging normative orders, AI-generated images stabilise them by converting embodied practices of excess and refusal into optimised styles. Through abstraction, embedding, and probabilistic recombination, difference becomes a pattern to be managed rather than a relation capable of intervening in social arrangements. In this sense, synthetic transgression functions as anticontingent depoliticisation, preserving the appearance of disruption while excluding the contingency through which transgression acquires political force. The paper then presents material from experimental workshops with graphic design students at the Royal Academy of Art, The Hague (KABK). Prompting was treated as an interface of mediation and power, and hybrid workflows transformed AI-generated texts and images into collectively produced, self-published zines. Within this setting, a queer materialist orientation toward graphic design was proposed and explored through practice, reintroducing manual labour and collective authorship and relocating outputs within situated processes of material production.
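As a toy numerical illustration (an editorial sketch under stated assumptions, not part of the paper) of how “abstraction, embedding, and probabilistic recombination” can smooth difference away: if transgressive works sit as outliers in an embedding space, a generator that samples from a density fitted to the whole corpus will overwhelmingly reproduce the mainstream cluster. The 2-D "style space" and single-Gaussian fit below are assumptions chosen for clarity.

```python
# Toy sketch: sampling a fitted density reproduces the statistical norm
# and rarely reaches outlying ("transgressive") regions of style space.
import numpy as np

rng = np.random.default_rng(0)

# 200 "mainstream" works in one cluster, 5 outliers far away in a 2-D style space.
mainstream = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
outliers = rng.normal(loc=6.0, scale=0.5, size=(5, 2))
corpus = np.vstack([mainstream, outliers])

# Fit a single Gaussian to the corpus (a stand-in for density estimation).
mu = corpus.mean(axis=0)
cov = np.cov(corpus.T)

# "Generate" 1,000 new works by sampling the fitted density.
samples = rng.multivariate_normal(mu, cov, size=1000)

# Count generated samples landing near the outlier cluster.
near = np.linalg.norm(samples - outliers.mean(axis=0), axis=1) < 1.5
print(f"generated samples near the outlier cluster: {near.mean():.1%}")
```

On these assumptions the printed share is effectively zero: the outliers inflate the fitted variance slightly, but samples concentrate around the mean, which is the statistical sense in which recombination manages difference rather than reproducing it.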
Paper short abstract
This paper examines AI in Morocco's border surveillance and the harms it produces: drone surveillance enabling lethal violence, deportations and drownings, and disinformation vilifying migrants. It shows how these systems reinforce colonial legacies, epistemic injustices that marginalize local values, and racialized oppression, and highlights decolonial resistance via open-source tools.
Paper long abstract
This paper examines the deployment of artificial intelligence (AI) technologies in Morocco's border control and surveillance systems, drawing on ethnographic fieldwork in border regions and urban centers like Rabat and Tangier. Building on the panel's focus on the epistemological, material, and historical power configurations shaping AI, the analysis reveals how AI perpetuates corporeal and immaterial harms targeting migrants, political dissenters, and marginalized communities. Morocco's integration of AI in automated border surveillance—such as facial recognition and predictive analytics for migrant interception—mirrors global extractive data practices exploiting labor and environmental resources in the global South, while reinforcing colonial legacies of French and Spanish influence. Through in-depth interviews with migrants, activists, and border officials, the paper illustrates how AI-enabled systems facilitate lethal violence, including automated drone surveillance leading to deportations and drownings in the Strait of Gibraltar, and amplify disinformation campaigns vilifying sub-Saharan migrants as threats. These technologies perpetuate gendered and racialized oppression, akin to abusive AI companionship models, and entrench epistemic injustices by prioritizing instrumental rationality over pluriversal values rooted in Amazigh and Arab-Islamic epistemologies. Yet the paper also highlights decolonial and feminist struggles resisting AI oppression, such as grassroots Moroccan initiatives using open-source tools to subvert surveillance data and advocate for data sovereignty. Situating Morocco within broader AI warfare infrastructures, this ethnographic inquiry contributes to rewriting AI through radical futurities, emphasizing how local adoption enables violence while offering anti-capitalist pathways of reclamation. It underscores the value of anthropological perspectives in dismantling AI's role in perpetuating historical injustices in North African contexts.
Paper short abstract
This paper explores the implications of AI in border practices in Finland. Based on interviews with migrants and migration officials, it shows how AI is contextually bound and shaped by national policies, furthering slow violence by categorizing migrants into deserving and undeserving.
Paper long abstract
This paper critically explores the implications of the automation of border practices in the context of the Global North.
The paper is based on qualitative ethnographic research among migration management officials and asylum seekers in Finland. The paper argues that while the implementation of AI in the border regime is driven by aspirations and imaginaries of rationalization and efficiency, in practice it creates a complicated temporal inequality between deserving and undeserving migrants. Automation of borders aims at efficiency; however, time is not the same for all. Instead of streamlined efficiency, migrants experience prolonged waiting, unpredictable delays, and shifting timelines that are difficult to contest or even comprehend. Rather than simply accelerating bureaucratic processes, automation restructures time itself—imposing new rhythms, pauses, and deferrals that disproportionately burden migrants.
The study further shows how the use of AI is contextually bound and shaped by cultural values and national policies, further categorizing migrants into deserving and undeserving. This is captured through the concept of slow violence (Nixon 2013), which illustrates how AI in migration management operates as an accumulative and hidden, rather than spectacular, form of power and inequality. The research makes visible that automation does not erase bureaucratic violence but reconfigures it—rendering uncertainty, delay, and opacity ordinary features of everyday life under migration management.
Paper short abstract
This paper examines how Turkey's sovereign AI initiatives, including national LLM development, may reproduce the epistemological and material power configurations they claim to resist, while also generating frictions that open space for alternative futures.
Paper long abstract
As states increasingly position artificial intelligence as a matter of national strategic priority, "sovereign AI" has emerged as a compelling discourse promising technological self-determination and autonomy from dominant corporate and geopolitical actors. Drawing on ethnographic research and policy analysis in Turkey, this paper critically examines how national AI governance frameworks—ostensibly designed to assert sovereignty—may paradoxically reproduce the very epistemological and material configurations of power they claim to resist.
Through analysis of policy documents, institutional deliberations, and fieldwork focusing on TÜBİTAK's Turkish large language model development project, this study traces how sovereign AI initiatives navigate tensions between aspirations for autonomy and the structural dependencies embedded in global AI supply chains, training data regimes, and computational infrastructures. The paper argues that sovereign AI frameworks often adopt the instrumental rationality and techno-solutionist logics of hegemonic AI development, even as they rhetorically contest Western technological dominance—creating institutional spaces where state-led AI expansion can enable new forms of surveillance and epistemic closure under the banner of national interest.
Yet the paper also attends to generative frictions within these processes—moments where local actors contest or redirect sovereign AI agendas toward more pluralistic ends. By situating Turkey's AI governance trajectory within broader patterns of technological nationalism in the global South, this contribution illuminates how sovereignty claims simultaneously challenge and reinscribe algorithmic power.
Paper short abstract
Uzbekistan is rapidly “AIfying” via biometrics, Safe City, and data-localisation. I trace how these systems reorder power—at borders, on platforms, in daily services—producing data borders and gendered harms, and sketch decolonial, feminist interventions for plural Uzbek AI futures.
Paper long abstract
Uzbekistan is rapidly institutionalising AI as state infrastructure: a national AI strategy to 2030, Digital Uzbekistan-2030 targets, and AI-friendly investment regimes—including a tax-free zone for AI and data centres in Karakalpakstan—signal accelerated “AIfication.” These sit atop earlier projects: Safe City deployments with Huawei’s video analytics/facial recognition in Tashkent and beyond, and biometric identity stacks (MyID facial/palm recognition) used across banking, metro access, and e-services, all under a 2019 data-localisation law. Efficiency is promised; surveillance, enclosure, and extractive data relations expand. This paper asks how AI’s expansion reorders power at the edge: for women navigating biometric portals; for migrants moving through “data borders” within e-government; for activists and creators whose visibility is measured, flagged, or throttled; and for communities in Karakalpakstan, where data-centre siting meets long histories of environmental harm around the Aral Sea. Methodologically, I combine policy/technical analysis (MyID architectures, data flows, model procurement), multi-sited ethnography (IT Park/start-ups, ministries, platforms, salons and streets in Tashkent), and participatory futures workshops with civil society to prototype plural, feminist, decolonial AI counter-visions. I argue that Uzbekistan’s AIfication does not merely “apply” global AI; it adopts and adapts it through specific legal, infrastructural, and moral orders (data localisation, Huawei-built Safe City, biometric rails), producing data borders that sort citizens and sensorial regimes that normalise watching. I conclude with practicable interventions—impact assessments with community veto points, red-team audits for biometric harms, and Karakalpakstan-first environmental standards—to open room for Uzbek AI futures beyond extractive instrumental rationality.