Accepted Contribution
Short abstract
Despite moral claims, AI experiments with “good”, value-sensitive surveillance require “data sacrifices.” The contribution examines the case of AI-based behavioral recognition technology in Hamburg (Germany) to demonstrate how AI is technologically and discursively purified to justify sacrifices.
Long abstract
Visual surveillance has been at the forefront of AI development in criminal justice for many years. It has also been subject to intense public scrutiny and civic criticism. This is particularly true of facial recognition systems, which have become synonymous with high-risk AI. In response, police and developers have sought to develop and introduce surveillance systems designed to deflect public critique of facial recognition and of mass surveillance “like in China.” This contribution examines public discourses and stakeholders’ understandings surrounding the testing and implementation of an AI-supported behavioral recognition system in the city of Hamburg (Germany). The police surveillance technology is legitimized as “good” surveillance in explicit opposition to high-risk systems. It will be shown, however, that despite such moral claims, AI experiments with “good” surveillance require various forms of “data sacrifices” (Knopp 2026): data for training and testing algorithms, systematic errors, organizational adaptations, and new laws that encroach on civil liberties. The presentation demonstrates how these data sacrifices are justified through technological and discursive purification. Building on notions of purification from laboratory studies (Bruno Latour) and the sociology of religion (Emile Durkheim), it discusses the discursive justification of an open-ended technology in an experimental setting characterized by uncertainty about the outcomes of AI development. Furthermore, it points to the work of critique that contests the legitimizing proofs and claims of AI proponents. The presentation thus contributes to the panel by unraveling the interplay between critique and justification in AI surveillance experiments.
A question of trust. Artificial intelligence in surveillance in healthcare and criminal justice
Session 1