Accepted Paper
Paper short abstract
In July 2025, Elon Musk’s AI chatbot “Grok” declared itself “MechaHitler”. Using Grok as a paradigmatic case and presenting “prompt ethnography” as a critical methodology, this paper highlights right-wing discursive power emerging at the intersection of social media and conversational AI.
Paper long abstract
In July 2025, Elon Musk’s AI chatbot “Grok” declared itself “MechaHitler” and began spewing antisemitic content and unabashed attacks on what it deemed “woke” ideas. Updates the company had just implemented realigned the model to abandon some central principles of content moderation, prodding it to shed inhibitions and give “politically incorrect” responses “if they are factual” (Belanger 2025). The updates conformed to the vision for the model that its owner had publicly declared on various occasions. In February 2024, announcing the new model on X (Twitter), Musk cited the model’s output to state that its mission is to “Roast the whole idea of ‘content moderation’. Be vulgar and sarcastic” (Musk 2024). Using Grok as a paradigmatic case, this paper scrutinizes right-wing incursions into generative AI. Presenting the multi-sited methodology of “prompt ethnography”, which builds on digital ethnography, longstanding ethnographic principles, and critical theory, the paper highlights novel forms of right-wing discursive power emerging at the intersection of social media, value alignments of conversational AI, and prompt-based popular participation with chatbots. The paper demonstrates that what AI models give out emerges from political choices around training data and value alignment rather than embodying a vague and abstract conception of “human communication”, a term that often euphemizes the deliberate decisions that lie underneath.
Anthropology of Artificial Intelligence and Oppression
Session 1