- Convenors: Matthias Wienroth (Northumbria University), Angela Paul (Northumbria University), Jodyn Platt (University of Michigan), Mackenzie Jorgensen (Northumbria University), Paige Nong (University of Minnesota), Kyle Montague (Northumbria University), Mavis Machirori (Ada Lovelace Institute), Carole McCartney (Leicester University)
- Chairs: Matthias Wienroth (Northumbria University), Carole McCartney (Leicester University)
- Discussants: Jodyn Platt (University of Michigan), Mavis Machirori (Ada Lovelace Institute)
- Format: Combined Format Open Panel
Short Abstract
New AI-driven surveillance technologies affect trust in, and the legitimacy of, healthcare and criminal justice systems. This panel explores how and why this may occur, with what consequences, and how it may be addressed through research and policy.
Description
Both healthcare and criminal justice systems require monitoring of populations and individuals to achieve their goals of improved health and justice. Yet, large-scale monitoring comes with concerns about how people will be impacted. Even without advanced surveillance, health and criminal justice systems contain many inequities. AI with automated decision-making and re-identification capabilities expands the scope and scale of surveillance as well as the danger of entrenching and introducing invisible inequities. This demands further scrutiny of the intersection of existing pathways with emerging technological and sociotechnical innovations. Changes are likely to affect the legitimacy of and trust in these systems, raising critical questions about how these systems can become more equitable and trustworthy as they deploy new surveillance technologies and knowledge practices.
This panel invites empirical and conceptual work that engages with AI surveillance in either healthcare or criminal justice (or both) to develop a comparative discussion on how developments are researched and considered. Each session will consist of three presentations followed by a roundtable.
The panel seeks contributions on the following topics:
- how ideas for and uses of AI surveillance technologies interact with issues of trust and legitimacy within healthcare and criminal justice;
- what these interactions can tell us about achieving the goals of health and justice;
- which logics and dynamics drive these developments (e.g. public good, zero-sum, competition, equity);
- what impacts the use of AI surveillance can have on different groups (e.g. refusal of care, over-policing, under-policing);
- which methodologies, conceptual ideas, lines of enquiry, and mechanisms are needed to build the evidence base on the uses and impacts of AI surveillance technologies, and to ensure that these technologies do not reproduce historic biases.
Accepted contributions
Session 1
Short abstract
This paper examines contested legitimacy in AI-enabled consumer surveillance through Ring’s partnerships with police. Using critical discourse analysis of company documents and public backlash, it traces opaque data-sharing networks and shifting claims about privacy and trust.
Long abstract
The rapid diffusion of AI-enabled consumer surveillance technologies is reshaping relationships between private technology firms, public authorities, and citizens. This paper examines the contested legitimacy of such systems through the case of Ring and its partnerships with law enforcement. Since the late 2010s, Ring has promoted collaborations with police departments through the Neighbors platform, allowing officers to request footage from residents in the name of “community safety.” However, these partnerships have raised persistent concerns about opaque data-sharing practices and the expansion of privatized surveillance infrastructures.
The paper focuses on the 2026 controversy surrounding Ring’s planned integration with license-plate recognition company Flock Safety and fears that such systems could facilitate collaboration with U.S. Immigration and Customs Enforcement. Following significant public backlash, including criticism across online forums and social media, Ring clarified that its programs serve “public safety agencies” rather than federal immigration authorities and subsequently ended the partnership.
Using critical discourse analysis of company statements, archived law-enforcement request documentation, privacy policies, and user commentary, this paper traces how Ring constructs narratives of privacy protection while maintaining provisions for disclosure under lawful requests. The case highlights how ambiguous governance arrangements surrounding AI-enabled surveillance platforms can erode public trust. By examining the discursive management of these controversies, the paper contributes to STS debates on how publics normalise and contest AI-driven surveillance infrastructures in contemporary policing ecosystems.
Short abstract
English and Welsh police forces are rapidly deploying live facial recognition despite limited regulation, posing a threat to fundamental rights. We assert that future regulation and best practices should prioritise the PAST principles: proportionality, accountability, safety, and transparency.
Long abstract
Police forces in England and Wales are rapidly adopting live facial recognition (LFR) systems, yet regulation is falling behind. The technology is notorious for its arbitrary deployment, which enables large-scale, real-time processing of biometric data, and for its dependence on broad watchlists, which exacerbates targeted surveillance. LFR strains fundamental rights, including privacy, freedom of expression, and freedom of assembly and association (Articles 8, 10, and 11 of the European Convention on Human Rights).
LFR carries misidentification risks that can amplify bias and violate anti-discrimination law. Certain demographic groups are disproportionately affected: Met Police data show that Black men are flagged at rates exceeding their share of London’s population, and in 2025, 80% of the people misidentified by LFR in London were Black. The recent case of Shaun Thompson, a Black man stopped outside London Bridge station after being misidentified by LFR, illustrates this “stop and search on steroids.”
Currently, no specific regulation governs LFR in England and Wales. In Bridges (2020), the Court of Appeal held that bias testing had been inadequate and that the use of LFR breached privacy rights; such judicial guidance, however, is no substitute for statutory regulation. In terms of potential progress, in December 2025 the Home Office launched a consultation on regulating biometric technologies, including LFR.
In this contribution, we examine the current state of LFR across policing in England and Wales and the implications of the regulatory gap. We argue that the ever-present PAST principles (proportionality, accountability, safety, and transparency) should guide ongoing and future LFR regulation and ground best practices for LFR.
Short abstract
The paper concerns AI weapons detection, a surveillance tool that UK police are trialling. We examine how the tool fits into existing legal frameworks, influences police discretion, and impacts the public. We argue that such tools may reconfigure how suspicion is formed and justified.
Long abstract
This paper introduces artificial intelligence (AI) weapons detection, a probabilistic AI tool currently being trialled in UK policing. Proponents of the technology assert that it offers a solution to rising knife crime in England and Wales and that it will reduce reliance on traditional stop and search practices, thereby enhancing public trust in policing. We argue that the process reverses ‘stop and search’ into ‘search and stop’. Further, we claim that, without appropriate governance, such systems risk reinforcing existing inequalities while complicating the transparency essential for lawful policing.
Traditional stop and search practices in the UK, both historically and today, have been criticised for their disproportionate impact on minority populations. Weapons detection systems rely on pattern recognition algorithms, which are known to inadvertently reproduce discrimination through technical bias. We also situate AI weapons detection within the existing UK legal framework, including the legal test of ‘reasonable suspicion’, which requires both subjective belief and objective justification based on specific, articulable facts. As AI tools increasingly intersect with frontline judgement, we need to understand how their outputs might influence, support, or obscure this reasoning process.
Following the socio-legal analysis, the paper also draws on empirical data from interviews with police officers, legal professionals, and civil society participants to explore how reasonable suspicion is constructed, applied, and scrutinised in practice. The findings highlight tensions between frontline pragmatism, accountability mechanisms, technological optimism, and concerns about transparency and fairness.
Short abstract
Despite moral claims, AI experiments with “good”, value-sensitive surveillance require “data sacrifices.” The contribution examines the case of AI-based behavioral recognition technology in Hamburg (Germany) to demonstrate how AI is technologically and discursively purified to justify sacrifices.
Long abstract
Visual surveillance has been at the forefront of AI development in criminal justice for many years. It has also been subject to intense public scrutiny and civic criticism. This is particularly true of facial recognition systems, which have become synonymous with high-risk AI. Responding to this criticism, police and developers have sought to introduce surveillance systems that counter public critique of facial recognition and of mass surveillance “like in China.” This contribution examines public discourses and stakeholders’ understandings of the testing and implementation of an AI-supported behavioral recognition system in the city of Hamburg (Germany). The police surveillance technology is legitimized as “good” surveillance in explicit opposition to high-risk systems. However, it will be shown that, despite their moral claims, AI experiments with “good” surveillance require different forms of “data sacrifices” (Knopp 2026): data for training and testing algorithms, systematic errors, organizational adaptations, and new laws that encroach on civil liberties. The presentation demonstrates how data sacrifices are justified through technological and discursive purification. Building on the notion of purification in laboratory studies (Bruno Latour) and the sociology of religion (Emile Durkheim), it discusses the discursive justification of an open-ended technology in an experimental setting characterized by uncertainty about the outcomes of AI development. Furthermore, it points to the work of critique that contests the legitimizing proofs and claims of AI proponents. The presentation thus contributes to the panel by unraveling the interplay between critique and justification in AI surveillance experiments.
Short abstract
This paper develops further an understanding of biometric surveillance data as ‘proxy data’ for categorical (or group-relevant) attribution and sets out some of the risks arising from this transformational process in biometric data collection, analysis, and use.
Long abstract
Biometric technologies are increasingly central to arguments made around justice and security in society, shaping policies, practices, public life, and people. Biometrics is about producing, defining, prioritising, and ignoring certain knowledge about humans. As part of surveillance technologies, biometric systems collect data from individuals, then aggregate and analyse these data, creating group categories of characteristics that are deployed in risk assessments. These abstracted data are then re-applied to individuals to assess potential risk levels and either grant or deny access to services. A further element in this process is the increasing desire to implement automated decision-making in such assessments. This can lead to discriminatory effects that serve neither justice nor security. There is therefore an urgent need for critical conceptualisation of the knowledge processes involved in biometric surveillance for identification.
This paper grapples with some of the theoretical background against which we may understand biometric surveillance. It develops further an understanding of biometric surveillance data as ‘proxy data’ for categorical (or group-relevant) attribution and sets out some of the risks arising from this transformational process in biometric data collection, analysis, and use. The paper considers some of the efforts to understand and engage with biometric data collection and use for surveillance purposes. The discussion aims to inform thinking about how we might imagine mitigation strategies for, and the needs of, knowledge-producing processes in surveillance assessments.