- Convenors:
- Maya Avis (Max Planck Institute for Social Anthropology)
- Daniel Marciniak (Max Planck Institute for Social Anthropology)
- Maria Sapignoli (University of Milan)
- Format:
- Panel
- Sessions:
- Wednesday 8 June, -
Time zone: Europe/London
Short Abstract:
This panel looks ethnographically at AI assemblages in security and governance, asking about pushback, continuities, and transformation when AI is introduced into practice in different contexts.
Long Abstract:
For this panel, we invite papers that look ethnographically at the use of AI technologies in security and governance, asking what happens in practice when technologies presented as AI are introduced. AI is often imagined as exceptionally transformative of all aspects of life. Occasionally, AI is even credited with changing human society itself. This panel will critically explore these claims and look at the power embedded in such representations of what AI can and will do in the areas of security and governance. We consider the narrative of rupture related to AI and remain attentive to the continuities that exist with and after the introduction of AI. A particular interest lies in the various forms of pushback that the introduction of AI may bring with it, from concerns about the deskilling of professionals such as police officers, judges, and security personnel, to public protest against technologies like facial recognition. We are also interested in cases of AI being used to hold state and corporate actors accountable. Possible questions include:
- What does the introduction of AI mean for policing practice and how does it relate to political struggles of police abolition?
- How does legislation shape (or become shaped by) the use of AI?
- How does the meaning of the future change when its prediction is mechanised?
- Do the targets of governance change with the introduction of AI?
- What is the epistemology underlying AI and how does it relate to existing forms of knowledge?
Accepted papers:
Session 1 Wednesday 8 June, 2022, -
Paper short abstract:
The idea of artificial intelligence requires that human imagination and sociality be abstracted out of prediction and decision-making processes undertaken by machines. This paper explains that such abstraction has a de-humanising effect, generating novel risks of harmful governance practices.
Paper long abstract:
In order to accept the proposition that intelligence can be 'artificial' or non-human, we need to resolve a consilient model of intelligence that is functional for both computer science and social science. Currently, in a computer science context, 'intelligence' describes the efficacy of automated prediction and decision-making processes. 'Intelligent' predictions and decisions are those that yield a desirable outcome. In a social scientific context, such processes are also considered imaginative and social. Imaginatively, we draw on culturally specific systems of ideas when making predictions about the possible effects of our actions. Socially, we learn our ideas from one another, we use those ideas to make predictions and decisions about our relationships with one another, and we use our real-world experience of those relationships to modify our ideas.
In a computer science context, it may seem intuitively plausible to exclude human imagination and sociality from prediction and decision-making processes. Once a system of ideas is embedded in code, that code will run independently, according to its pre-programmed logic and available data sources. However, as with all human technology, the observable effects of this de-humanisation are not the removal of human imagination or sociality, but rather their abstraction. This paper describes the process of feedback between increasingly abstract ideas about artificial intelligence, and the accelerating instantiation of those ideas in real-world, AI-mediated interactions between people. The paper then explains that such accelerating abstraction also has a de-humanising effect, generating novel risks of harmful governance spanning healthcare, education, economics, justice and environmental management.
Paper short abstract:
In this article, we examine the role of arbitrariness in relation to how AI is used in the service of state power in the US and Palestine/Israel.
Paper long abstract:
In this article, we examine the role of arbitrariness in relation to how AI is used in the service of state power. The introduction of AI into governments’ decision-making across the world, in fields from social benefits to security, has led to fierce debate about the bias perpetuated by basing decisions on data that encodes existing inequalities and injustices. Cathy O’Neil has termed these automated, large-scale decision-making systems ‘weapons of math destruction’. By contrast, proponents of AI argue that the absence of human decision-makers also removes their biases and therefore leads to fairer outcomes. We seek to paint a more complex picture by examining the development of predictive policing software in the United States and its stated goals of improving the spatial allocation of police patrols. Here, we highlight the deep contradiction between automating police stop-and-search strategies so as to eradicate racial bias and the violence of interrupting an innocent person’s life. Building on this ‘arbitrariness as result’, we examine what we consider ‘arbitrariness by design’ in the automated creation of target banks and the so-called ‘Facebook arrests’ carried out by Israeli forces across Palestine/Israel. Here, AI and the narratives around its adoption, together with the seemingly arbitrary application of force, contribute to a state that governs through its unpredictability, rather than arbitrariness being an (unintended) outcome of the way AI is used.