Accepted Paper:

Engineering the unconscious: nudges, generative adversarial networks, and the imponderabilia of everyday life  
Deepak Prince (IIIT Delhi)

Paper short abstract:

This paper explores the intersection between humans and machines by comparing the logic of nudging users towards preferred choices on web interfaces with Generative Adversarial Networks, a class of deep-learning AI frameworks that fool a trained neural network into making poor choices.

Paper long abstract:

A key design paradigm for contemporary user interfaces involves 'nudges' - techniques for getting users to make the choice that the designer intends them to make. An example is the 'next episode' button on Netflix, which auto-clicks after a few seconds, nudging users towards a binge-watching experience. The condition of possibility for this design framework is the vast quantity of data generated at the edge by users. Such data, I suggest, contain within them objectified traces of what Malinowski referred to as the 'imponderabilia of actual life' - the raw material for anthropological knowledge. I argue that the approach of data scientists and AI engineers hews very closely to anthropological notions of empirical actuality. Contemporary AI models are predicated upon machines learning from vast quantities of data, classified or labelled by humans. By poring over these data, the neural network model 'learns' the rules of classification implicitly expressed in the labelled data, and formulates rules to generate classifications of new objects that were not present in the training data set. A specific class of AI frameworks, called Generative Adversarial Networks or GANs, seeks to 'fool' such trained neural-network models into making gross errors of classification by manipulating the way neural networks compute classification probabilities. I suggest that a fruitful comparison may be made between the practice of nudging and that of generative adversarial networks, where human and machine are flattened out on a single plane. This approach, I argue, affords new ways of thinking about human-AI ethics.
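
For readers unfamiliar with the mechanics the abstract gestures at, the following is a minimal illustrative sketch (not part of the paper) of the adversarial dynamic described above, assuming PyTorch: a generator network produces candidate samples while a discriminator network, trained to separate real from generated data, is progressively 'fooled'. The toy Gaussian data, network architectures, and hyperparameters are assumptions made purely for illustration.

# Minimal GAN sketch (illustrative only; not drawn from the paper).
# A generator learns to produce samples resembling a 1-D Gaussian;
# a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from N(4, 1.25) -- a stand-in for human-generated data.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.25 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Train the discriminator: label real data 1, generated data 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: push the discriminator to output 1 ("real")
    # for generated samples, i.e. to 'fool' the trained network.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())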

Panel P23a
Programming anthropology: coding and culture in the age of AI
  Session 1 Friday 10 June, 2022