- Convenor:
- Gui Heurich (UCL)
- Format:
- Panel
- Sessions:
- Friday 10 June, -
Time zone: Europe/London
Short Abstract:
This panel will explore the intersections between anthropology and computer programming by looking, on the one hand, at ethnographies of data, algorithms, and coding, and on the other hand, by exploring how anthropologists themselves have used or could incorporate programming in their research.
Long Abstract:
Artificial intelligence (AI) evokes images of robots and sentient machines that are not yet a reality. However, AI is already a reality in the form of the machine learning algorithms that power the applications and devices people interact with daily. Search engine suggestions, mapping and route finding, social media ads, music recommendations, and smart cameras and devices are all powered by such code. As such, one could argue that AI is code. In this panel, we will ask: how can anthropology engage with the practice of coding/programming?
We would like to invite scholars to explore the many ways in which anthropology could understand the role of programming, machine learning, and artificial intelligence in social life. On the one hand, we welcome ethnographic and anthropological analyses of data, algorithms, and programming in any social context. On the other hand, we would also like to invite anthropologists who have knowledge of programming or who have used programming scripts in their research, be it for statistical analysis, data visualization, or any other research practice. Combined, these two perspectives will give us a variety of contexts in which to explore the intersections between anthropology and computer programming, thus creating a critical understanding of how artificial intelligence might shape (in fact, program) the future of our discipline.
Accepted papers:
Session 1 Friday 10 June, 2022, -
Paper short abstract:
This paper addresses participation in coding as an ethnographic and autoethnographic approach to researching programming processes.
Paper long abstract:
My research is positioned between cultural anthropology and computer science in a number of ways. For my PhD project in cultural anthropology, I work on the processes through which software is written for humanoid robots in a team of computer scientists who program such robots to play soccer. My methodological approach is both ethnographic and autoethnographic, as I am simultaneously pursuing a Bachelor’s degree in computer science and have joined this team as an active member. I thus participate in programming the software for the robots with the computer scientists I conduct ethnographic research with.
This paper will address my experiences of gaining ethnographic insights into coding, and of coding for ethnographic insights. It will focus on methodological reflections on my positionality, and on how participating in the coding processes provided me with a deeper understanding of my research topic that would otherwise have been hard to gain. It will also explore my trajectory of becoming a computer scientist and a programmer of humanoid robots during fieldwork.
Paper short abstract:
This paper explores two Romanian sites - an informal coding school and a start-up of front-end programming automation - as pedagogical levers for the recalibration of abstraction, dwelling on the uneven cognitive formatting of humans and machines in an outsourcing-based coding economy.
Paper long abstract:
Dreaming of becoming the East European Silicon Valley, Cluj-Napoca (Romania) hosts a growing IT industry that thrives as an outsourcing market in constant need of cheap labor. Information technology, the engine of local creative economies, has become key to personal as well as urban development. This paper explores two IT sites in Cluj for the paradoxes they reveal about contemporary concatenations between knowledge, technology and economy: an informal school offering IT classes geared towards professional reconversion, and a start-up working in the area of front-end programming automation. In the first case, participants drawn in by the compelling mirage of well-paid IT jobs strive to become initiated in the basics of algorithmic thinking and computer programming. In the second, developers and tech visionaries aim to provide a “mental exoskeleton” for creative workers in the shape of an AI-powered, collaborative platform for the design of user interfaces. I study these contexts in an ethnomethodological vein, but I analyze them through the lens of Marx’s Fragment on Machines (arguing implicitly for the need to consider both ethnomethodological and Marxist roots of STS). Invested with famous optimism in the postoperaismo tradition as well as in recent proposals of postcapitalism and accelerationism, the Fragment has provoked much debate about the shape of value, but less so about the shape and distribution of knowledge as abstraction. Approaching these two cases as pedagogical levers for the recalibration of abstraction allows me to dwell on the uneven cognitive formatting of humans and machines in an outsourcing-based coding economy.
Paper short abstract:
This paper explores the intersection between humans and machines by comparing the logic of nudging users towards making preferred choices on web interfaces with a class of deep-learning AI frameworks called Generative Adversarial Networks, which fool a trained neural network into making poor choices.
Paper long abstract:
A key design paradigm for contemporary user-interfaces involves 'nudges' - techniques of getting users to make the choice that the designer intends them to make. An example is the 'next episode' button on Netflix which auto-clicks after a few seconds, nudging users towards a binge-watching experience. The condition of possibility for this design framework is the vast quantity of data generated at the edge by users. Such data, I suggest, contain within them objectified traces of what Malinowski referred to as the 'imponderabilia of actual life' - the raw material for anthropological knowledge. I argue that the approach of data scientists and AI engineers hews very closely to anthropological notions of empirical actuality. Contemporary AI models are predicated upon machines learning from vast quantities of data, classified or labelled by humans. By poring over this data, the neural network model 'learns' the rules of classification that are implicitly expressed in such labelled data, and formulates rules to generate classifications of new objects that were not present in the training data set. A specific class of AI frameworks, called Generative Adversarial Networks or GANs, seeks to 'fool' such trained neural-network models into making gross errors of classification by manipulating the way neural networks compute classification probabilities. I suggest that a fruitful comparison may be made between the practice of nudging and that of the generative adversarial networks, where human and machine are flattened out on a single plane. This approach, I argue, affords new ways of thinking about human-AI ethics.
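The adversarial logic the abstract describes - fooling a trained model into misclassification by exploiting how it computes its scores - can be sketched minimally. The example below is an illustration (not the paper's method, and simpler than a full GAN): a logistic classifier is trained on one-dimensional data, then an input is perturbed against the model's own gradient until the trained model flips its decision.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a tiny logistic classifier: negative points are class 0,
# positive points are class 1.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, 200)
y = (X > 0).astype(float)

w, b = 0.0, 0.0
for _ in range(500):                       # plain gradient descent
    p = sigmoid(w * X + b)
    w -= 0.1 * float(np.mean((p - y) * X))
    b -= 0.1 * float(np.mean(p - y))

x = 0.4                                    # correctly classified as class 1
assert sigmoid(w * x + b) > 0.5

# Adversarial step: nudge the input against the classifier's gradient
# until the trained model reverses its decision.
x_adv = x
while sigmoid(w * x_adv + b) > 0.5:
    x_adv -= 0.05 * np.sign(w)
```

The perturbation does not retrain the model; it manipulates only the input so that the model's computed probability crosses the decision boundary, which is the same asymmetry GAN-style attacks exploit at scale.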
Paper short abstract:
One of the perspectives that can give anthropologists insight into the workings of AI is to treat the coding that creates machine learning algorithms as a linguistic or semiotic process. Coding can be seen as a self-reflexive form of ‘semiotic labor’ performed by both humans and machines.
Paper long abstract:
One of the perspectives that can give anthropologists insight into the workings of AI is to treat the coding that creates machine learning algorithms as a linguistic or semiotic process. Colloquially, computer code is said to be ‘written’ in a particular ‘language’ and, as such, is said to have ‘syntax’ and levels of ‘semantic encoding’. In the twentieth century, attempts to create artificial intelligence through the modeling of ‘expert systems’, by focusing on this symbolic-referential aspect of language, were largely unsuccessful. Of more relevance to linguistic anthropology are the metapragmatic and indexical functions of language, which require social context for meaning to emerge. Similarly, the current focus in AI on machine learning and neural networks has seen a shift away from symbolic models of AI towards emergent, self-reflexive models where meaning is understood to be contingent on context. The metapragmatic features of language, that is, the ability to speak about language using language, find a parallel in the code of machine learning algorithms that have been explicitly written to be able to change their own code. Algorithms ‘learn’ from specific instances of a program running much like human speakers learn from social context what is appropriate, or not, to say. Algorithms then reflexively tweak their own code by adjusting weights or parameters in the code itself, which in turn changes their subsequent performances. In this perspective, coding can be seen as a form of ‘semiotic labor’ performed by both humans and machines through a continuous process of self-correction and adaptation.
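The reflexive adjustment described here - a program learning from specific instances and tweaking its own weights, which changes its subsequent performances - can be sketched as an online perceptron. This is a hypothetical, minimal illustration (not code from the paper), here learning logical OR from a stream of labelled examples:

```python
def train_perceptron(stream, lr=0.1):
    """stream: iterable of ((x0, x1), label) pairs with label in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for x, label in stream:
        pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
        err = label - pred                 # feedback from this one instance
        # Reflexive self-correction: the program adjusts its own parameters,
        # changing how it will classify the next instance it sees.
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err
    return w, b

# Usage: repeated exposure to labelled instances of logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)] * 25
w, b = train_perceptron(data)

def predict(x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

print([predict(x) for x, _ in data[:4]])   # -> [0, 1, 1, 1]
```

Each weight update is triggered by a mismatch between the model's current behavior and the instance just encountered, a mechanical analogue of the context-driven self-correction the abstract compares to human speakers adjusting what they say.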