- Convenors:
Matthias Wienroth (Northumbria University)
Angela Paul (Northumbria University)
Jodyn Platt (University of Michigan)
Mackenzie Jorgensen (Northumbria University)
Paige Nong (University of Minnesota)
Kyle Montague (Northumbria University)
Mavis Machirori (Ada Lovelace Institute)
Carole McCartney (Leicester University)
- Format:
- Combined Format Open Panel
Short Abstract
New AI-driven surveillance technologies affect trust in, and the legitimacy of, healthcare and criminal justice systems. This panel explores how and why this occurs, with what consequences, and how it may be addressed through research and policy.
Description
Both healthcare and criminal justice systems monitor populations and individuals to achieve their goals of improved health and justice. Yet large-scale monitoring raises concerns about how people will be affected. Even without advanced surveillance, health and criminal justice systems contain many inequities. AI with automated decision-making and re-identification capabilities expands the scope and scale of surveillance, as well as the danger of entrenching existing inequities and introducing invisible new ones. This demands further scrutiny of how existing pathways intersect with emerging technological and sociotechnical innovations. These changes are likely to affect the legitimacy of, and trust in, both systems, raising critical questions about how they can become more equitable and trustworthy as they deploy new surveillance technologies and knowledge practices.
This panel invites empirical and conceptual work that engages with AI surveillance in either healthcare or criminal justice (or both) to develop a comparative discussion on how developments are researched and considered. Each session will consist of three presentations followed by a roundtable.
The panel seeks contributions on the following topics:
- how ideas for, and uses of, AI surveillance technologies interact with issues of trust and legitimacy within healthcare and criminal justice;
- what these interactions can tell us about achieving the goals of health and justice;
- which logics and dynamics drive these developments (e.g. public good, zero-sum thinking, competition, equity);
- what impacts the use of AI surveillance can have on different groups (e.g. refusal of care, over-policing, under-policing);
- which methodologies, conceptual ideas, lines of enquiry, and mechanisms are needed to build the evidence base on the uses and impacts of AI surveillance technologies, and to ensure that these technologies do not reproduce historic biases.