- Convenors:
- Maya Avis (Max Planck Institute for Social Anthropology)
- Daniel Marciniak (Max Planck Institute for Social Anthropology)
- Maria Sapignoli (University of Milan)
- Format:
- Panel
- Sessions:
- Wednesday 8 June
Time zone: Europe/London
Short Abstract:
This panel looks ethnographically at AI assemblages in security and governance, asking about pushback, continuities, and transformation when AI is introduced into practice in different contexts.
Long Abstract:
For this panel, we invite papers that look ethnographically at the use of AI technologies in security and governance, asking what happens in practice when technologies presented as AI are introduced. AI is often imagined as exceptionally transformative of all aspects of life. Occasionally, AI is even credited with changing human society itself. This panel will critically explore these claims and look at the power embedded in such representations of what AI can and will do in the areas of security and governance. We consider the narrative of rupture related to AI and remain attentive to the continuities that exist with and after the introduction of AI. A particular interest lies in the various forms of pushback that the introduction of AI may bring with it, from concerns about the deskilling of professionals such as police officers, judges, and security personnel, to public protest against technologies like facial recognition. We are also interested in cases of AI being used to hold state and corporate actors accountable. Possible questions include:
- What does the introduction of AI mean for policing practice, and how does it relate to political struggles for police abolition?
- How does legislation shape (or become shaped by) the use of AI?
- How does the meaning of the future change when its prediction is mechanised?
- Do the targets of governance change with the introduction of AI?
- What is the epistemology underlying AI and how does it relate to existing forms of knowledge?
Accepted papers:
Session 1: Wednesday 8 June, 2022
Paper short abstract:
Criticism of the use of FRT in law enforcement has focused on privacy and data protection rights. Drawing on a landmark legal case, I examine how notions of privacy are mobilised and contested. I argue for the need to rethink privacy in relational terms to open new radical critiques of surveillance.
Paper long abstract:
While facial recognition technology (FRT) has been increasingly used by migration and law enforcement authorities in public spaces, it is also one of the few technologies that has been legally contested and even banned (Aradau and Blanke 2021, Madiega and Mildebrath 2021). Specifically, public contestations of biometric surveillance have mostly focused on how its intrusiveness, as well as its susceptibility to error, violates the rights to privacy and data protection (Almeida, Shmarko et al. 2021). Drawing on a landmark legal case around facial recognition and on public discourses of law enforcement authorities in the UK, I examine how notions of privacy and data protection have been mobilised and contested in relation to biometric technologies. I show how dominant discussions on privacy are grounded in individualistic notions of rights and primarily concern the ways distinct (and opposing) individual and security interests can be balanced. I argue that, while policies remain deaf to public preoccupations with privacy, these preoccupations ultimately obscure the wider impact of one individual’s privacy on those with and to whom they are connected. I suggest that a critical engagement with biometric technologies needs to complexify privacy and data protection rights in relational terms, in order to open new conditions for a critique of surveillance.
Paper short abstract:
In July 2020, the Sentence Risk Assessment Instrument was implemented in Pennsylvania courts to evaluate "the relative risk that an offender will reoffend and be a threat to society." Through interviews, I probe how judges interpret and use the tool's recommendations in their sentencing decisions.
Paper long abstract:
In July 2020, the Sentence Risk Assessment Instrument was implemented in Pennsylvania criminal courts to evaluate "the relative risk that an offender will reoffend and be a threat to society." Recidivism risk assessment instruments, which estimate an individual’s risk of rearrest for a future crime, are often presented as a data-driven strategy for progressive judicial reform – a way of reducing racial bias in sentencing, abolishing cash bail, and reducing mass incarceration. In much of the United States, these risk scores inform judges’ decisions including bail, pretrial release, and sentencing. However, little is known about whether and how risk assessment promotes these progressive goals in practice. Growing empirical evidence suggests that risk assessment can increase racial disparities in judicial discretion because judges may selectively disregard risk scores along racial lines and are more likely to agree with recommendations to detain defendants. Given the stakes, it is essential to understand not only risk assessment tools' construction and fairness, but also their effects on algorithmic fairness in practice – that is, how the instrument's recommendations interact with judicial discretion upon deployment. In interviews with 20 judges who use the Sentence Risk Assessment Instrument's recommendations in their felony and misdemeanor sentencing decisions, I probe how judges interpret and conform to the tool's recommendations, as well as how they weigh the role of recidivism risk in their decision-making overall. I also discuss the effects the tool has had on judges' day-to-day work and their attitudes about the controversy the tool has generated.
Paper short abstract:
Drawing on ethnographic research on smart city initiatives in China, I demonstrate how regional governments and startup companies use the logics of machine learning as a heuristic to understand how the state crowdsources policy solutions, as well as their own role within such a system.
Paper long abstract:
It is a longstanding but controversial notion that Chinese society works through top-down command chains that impose totalizing homogeneity. Anthropologists have challenged a univocal picture that overstresses the hegemonic power of the state by exploring themes of resistance (e.g. Weller 1994) and an informal “second society” that operates beyond the state (e.g. Yang 1989). In this paper, I add nuance to this debate by providing local perspectives on how the Chinese state operates. Drawing on fieldwork I conducted in China on the local implementation of national smart city mandates, I show how regional governments and startup companies understand the operation of the Chinese state as akin to a machine learning algorithm that solves problems by parsing large data sets without explicit programming. Seeing themselves as policy-generating nodes within a nationwide machine learning assemblage, these actors understand this computational analogy to be simultaneously a reinterpretation and a seamless continuation of the late paramount leader Deng Xiaoping’s philosophy: “It doesn’t matter whether a cat is black or white; as long as it catches mice, it is a good cat.” My analysis clarifies the role of the market economy in Chinese governance under Capitalism with Chinese Characteristics. It also dispels a popular misconception about so-called Chinese “collectivism”: instead of a bland “copy-and-paste” homogeneity, Chinese governance relies on the constant collective production of an abundance of diversity, in effect enlisting local governments to form what AI practitioners call generative adversarial networks (GANs).