
Accepted Contribution:

Knowledge grabbing and the reproduction of anthropology  
Oana Mateescu (Babes-Bolyai University)

Contribution short abstract:

Any definition of responsible use of generative AI in academic writing cannot afford to ignore the political economy of knowledge appropriated via stochastic parroting. An anthropology that stakes its reproduction on becoming AI literate risks becoming ethically and epistemologically illiterate.

Contribution long abstract:

Soon after the launch of large language models (LLMs) such as OpenAI’s ChatGPT, the world of academia went into a justified panic that associated the advent of generative AI with the end of education, of scientific research, and, more generally, a “textpocalypse” (Kirschenbaum 2023). Universities began the arduous process of compiling guidelines for students’ use of AI, recalibrating standards of honesty and integrity, and bootstrapping programs of AI literacy.

My intervention is grounded in such processes: experimenting with AI tools in the classroom and participating in a departmental (Sociology and Anthropology) committee charged with establishing thresholds of responsible use of generative AI in social science research and academic writing. I focus on the leftovers of such bureaucratic processes, and specifically on the political economy of knowledge appropriated and reproduced via the stochastic parroting characteristic of LLMs. While much attention should go to the extractive labor that makes AI possible, the ownership structures that circumscribe the operation of AI are just as important. Not only is AI the property of corporations with vested profit motives, but so is the substance of knowledge (prompts and knowledge files) produced via AI-user interactions. Is the framework of intellectual property protection enough to guard against the ethical infringements inherent in prompting AI with qualitative ethnographic data? Moreover, what are the epistemological and political risks for anthropology in outsourcing the interpretation of ethnographic data to AI? An anthropology that stakes its reproduction on becoming AI literate might very well end up being ethically and epistemologically illiterate.

Roundtable RT032
Artificial intelligence: the Oppenheimer moment?
Session 1: Thursday 25 July 2024