
Accepted Paper:

Arbitrary AI: an ethnographic reflection on Palestine/Israel and the US  
Maya Avis (Max Planck Institute for Social Anthropology) and Daniel Marciniak (Max Planck Institute for Social Anthropology)


Paper short abstract:

In this article, we examine the role of arbitrariness in relation to how AI is used in the service of state power in the US and Palestine/Israel.

Paper long abstract:

In this article, we examine the role of arbitrariness in relation to how AI is used in the service of state power. The introduction of AI into governments’ decision making across the world, in fields ranging from social benefits to security, has led to fierce debate about the bias perpetuated by basing decisions on data that encodes existing inequalities and injustices. Cathy O’Neil has termed these automated, large-scale decision-making systems ‘weapons of math destruction’. By contrast, proponents of AI argue that removing human decision-makers also removes their biases and therefore leads to fairer outcomes. We seek to paint a more complex picture by examining the development of predictive policing software in the United States and its stated goal of improving the spatial allocation of police patrols. Here, we highlight the deep contradiction between automating police stop-and-search strategies in order to eradicate racial bias and the violence of interrupting an innocent person’s life. Building on this ‘arbitrariness as result’, we examine what we consider ‘arbitrariness by design’ in the automated creation of target banks and the so-called ‘Facebook arrests’ carried out by Israeli forces across Palestine/Israel. Here, AI and the narratives around its adoption, together with the seemingly arbitrary application of force, contribute to a state that governs through its unpredictability, rather than arbitrariness being an (unintended) outcome of the way AI is used.

Panel P08b
AI as a Form of Governance: Imagination, Practice and Pushback
  Session 1: Wednesday 8 June, 2022