- Convenor:
- Gui Heurich (UCL)
- Format:
- Panel
- Sessions:
- Friday 10 June, -
Time zone: Europe/London
Short Abstract:
This panel will explore the intersections between anthropology and computer programming by looking, on the one hand, at ethnographies of data, algorithms, and coding, and, on the other, at how anthropologists themselves have used or could incorporate programming in their research.
Long Abstract:
Artificial intelligence (AI) evokes images of robots and sentient machines that are not yet a reality. However, AI is already a reality in the form of the machine learning algorithms that power the applications and devices people interact with daily. Search engine suggestions, mapping and route finding, social media ads, music recommendation, and smart cameras and devices are all powered by such code. As such, one could argue that AI is code. In this panel, we will ask: how can anthropology engage with the practice of coding/programming?
We would like to invite scholars to explore the many ways in which anthropology could understand the role of programming, machine learning, and artificial intelligence in social life. On the one hand, we welcome ethnographic and anthropological analysis of data, algorithms, and programming in any social context. On the other hand, we would also like to invite anthropologists who have knowledge of programming or who have used programming scripts in their research, be it for statistical analysis, data visualisation, or other research practices. Combined, these two perspectives will give us a variety of contexts in which to explore the intersections between anthropology and computer programming, thus creating a critical understanding of how artificial intelligence might shape (in fact, program) the future of our discipline.
Accepted papers:
Session 1 Friday 10 June, 2022, -

Paper short abstract:
By observing AI training courses ethnographically, I aim to understand how students develop their mastery of the specific inscriptions (Latour 1985) of AI. I will argue that diagrams (Mackenzie 2017) are at the core of the pedagogical practices of these courses.
Paper long abstract:
In his recent book, Florian Jaton demonstrates that ethnographies of AI labs are needed to understand what is at stake in the "progressive constitution of algorithms" (Jaton 2020).
In my PhD research, I radicalise his invitation by observing AI training courses ethnographically to understand how students develop their mastery of the specific inscriptions (Latour 1985) of AI. I will argue that diagrams are not only central to the development of algorithms (Mackenzie 2017): they are also at the core of the pedagogical practices of these training courses. Teachers use diagrams to enable their students to develop a geometric and intuitive understanding of how AI works, which largely dominates their practical work. Students often seem to carry out programming operations for the sole purpose of reworking the shape of the diagrams they are constantly producing, until they come closer to an expected shape, taken as a sign of the algorithm's efficiency. While students are required to justify their operations in mathematical terms, they often only rigorously grasp the mathematics behind their work after having retrospectively investigated their own "diagrammatic" operations. Students thus assimilate, while writing their reports, habits of veridiction that mask the reality of algorithmic design.
I will draw on an ethnographic study of two AI master's degree training courses at a major French university, conducted over a full semester, from September 2021 to January 2022. In this context, I accumulated nearly 100 hours of observation, following four classes and attending several student work sessions.
Paper short abstract:
In this paper, I lay out the contours of an idiographic data practice for computational anthropology. Expanding on a recent engagement with AI-related controversies on Wikipedia, I try to make clear how coding and computation in an online fieldwork setting can be distinctively non-nomothetic.
Paper long abstract:
In this paper, I lay out the contours of what I tentatively call an idiographic data practice for computational anthropology. Expanding on a recent engagement with AI-related controversies on Wikipedia, I try to make clear how coding and computation in an online fieldwork setting can take on a flavour that is distinctively different from what is typically associated with nomothetic computational social science. When deployed as part of a descriptive and explorative endeavour that rests on ethnographic tenets like the ability to discover new questions from the setting and to reformulate research problems in the field, Python libraries for scraping data, interacting with APIs, and conducting natural language processing or other machine learning tasks cannot be put to their best use if they are understood as instruments in an essentially quantitative and predictive toolbox. Questions about explainability, efficiency, or reliability fade into the background in favour of questions about adaptability and how to tailor uniquely adequate solutions for empirical situations that are constantly changing. Based on examples from my own work with a large Wikipedia corpus, I therefore propose a set of principles for what an idiographic data practice in computational anthropology could aspire to and be judged on.
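To make the flavour of such an idiographic pass concrete, here is a minimal, invented sketch: instead of fitting a predictive model, it filters Wikipedia revision metadata for edit comments that hint at contestation (reverts, NPOV disputes) and keeps the full records for later close reading. This is not the author's actual code; the marker list and sample records are illustrative assumptions, and real data would come from the MediaWiki revisions API.

```python
from collections import defaultdict

# Comment substrings taken, for illustration, as hints of contestation.
CONTROVERSY_MARKERS = ("revert", "undid", "npov", "dispute")

def flag_controversial(revisions):
    """Group revisions by user, keeping those whose edit comment
    suggests contestation; full records are preserved so the
    researcher can read them in context rather than just count them."""
    flagged = defaultdict(list)
    for rev in revisions:
        comment = rev.get("comment", "").lower()
        if any(marker in comment for marker in CONTROVERSY_MARKERS):
            flagged[rev["user"]].append(rev)
    return dict(flagged)

# Invented sample records standing in for API output.
sample = [
    {"user": "A", "comment": "Undid revision 123 by B", "timestamp": "2021-05-01"},
    {"user": "B", "comment": "fix typo", "timestamp": "2021-05-02"},
    {"user": "C", "comment": "restore NPOV wording", "timestamp": "2021-05-03"},
]

hits = flag_controversial(sample)
```

The point of the sketch is the design choice: output is kept qualitative and inspectable, adapted to the field site, rather than reduced to a metric.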
Paper short abstract:
In this paper I ask how the objects of machine intelligence reveal the “dialogicality” of the social worlds from which they are drawn. I develop a method to subject these objects to ethnographic scrutiny, to reveal the means through which machine intelligence constructs its objects of knowledge.
Paper long abstract:
In this paper I ask how the objects of machine intelligence reveal, upon ethnographic scrutiny, the “dialogicality” of the social worlds from which they are drawn. I subject these objects to scrutiny, as a method, to reveal the means through which machine intelligence constructs its objects of knowledge. I treat the process of producing objects of knowledge through machine intelligence as a “text-artifact” which applied machine learning researchers “decontextualize” as they produce knowledge. They transform—I argue—the text-artifact of data, algorithms, classifications, and predictions into purely “denotational” objects while eliding how the text-artifact “was originally laid down, or sedimented, in the course of a social process” (Silverstein and Urban 1996, 5). Social material-discursive practices are entextualized through the data collection and processing practices of machine intelligence. They are taken to stand in “for” the phenomena they purport to represent, and are “reanimated” through data performances that shift the ontological claims of that which is performed. This process is particularly evident when applied to natural language processing techniques, as I demonstrate in this paper, but I propose that it also applies to non-linguistic applications of machine intelligence.
Paper short abstract:
We discuss issues arising from applying natural language processing and data science methods to search and analyse the collection of ethnography curated by the Human Relations Area Files, Yale University. In particular, we examine how comparative research might be better enabled and pitfalls avoided.
Paper long abstract:
We discuss issues arising from applying natural language processing and data science methods to assist search and analysis of the largest online collection of ethnography, curated by the Human Relations Area Files (HRAF) at Yale University. In particular, we examine how comparative research might be better enabled and pitfalls avoided, and how eHRAF and other online resources can achieve some level of interoperability, so that research and practitioner communities can combine and utilise online data tools from different sources. iKLEWS (Infrastructure for Knowledge Linkages from Ethnography of World Societies) is a HRAF project funded by the US National Science Foundation. iKLEWS is developing semantic infrastructure and associated computer services for a growing textual database of ethnography (eHRAF World Cultures), presently with roughly 750,000 pages from 6,500 ethnographic documents covering 360 world societies over time. The basic goal is to greatly expand the value of eHRAF World Cultures to students and researchers who seek to understand the range of possibilities for human understanding, knowledge, belief, and behaviour, including research on real-world problems we face today, such as climate change, violence, disasters, epidemics, hunger, and war. Understanding how and why cultures vary in the range of possible outcomes in similar circumstances is critical to improving policy, applied science, and basic scientific understandings of the human condition in an increasingly globalised world. Moreover, seeing how others have addressed issues in the recent past can help us find solutions we might not find otherwise.
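As a toy illustration of the kind of text search that semantic infrastructure such as iKLEWS would layer richer services on top of, here is a keyword-in-context (concordance) pass over a tiny invented corpus. This is not the iKLEWS system itself; the two-document "corpus" is fabricated for the sketch, and real eHRAF documents are far larger and carry culture and subject metadata.

```python
import re

# Invented two-document corpus standing in for ethnographic texts.
corpus = {
    "doc1": "The harvest festival follows the first rains each year.",
    "doc2": "Households pool grain after the harvest to guard against hunger.",
}

def kwic(corpus, term, window=3):
    """Return (doc_id, left context, matched token, right context)
    tuples for every occurrence of `term`, case-insensitively."""
    results = []
    for doc_id, text in corpus.items():
        tokens = text.split()
        for i, tok in enumerate(tokens):
            # Strip punctuation before comparing.
            if re.sub(r"\W", "", tok).lower() == term.lower():
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                results.append((doc_id, left, tok, right))
    return results

hits = kwic(corpus, "harvest")
```

Even this crude concordance shows why context windows matter for comparative work: the same term surfaces in ritual and in subsistence contexts, a distinction a bare frequency count would erase.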