Keeping the machine in check: predictive policing and the "human in the loop"
Matthias Leese (ETH Zurich)
Paper short abstract:
This paper empirically analyzes police strategies to keep predictive policing software in check: (1) human oversight; (2) strengthening human reasoning vis-à-vis the machine; (3) invoking data quality and data protection; and (4) contextualization within larger trajectories of police work.
Paper long abstract:
Predictive policing - broadly speaking, the claimed ability to forecast where and when the next crime or series of crimes will take place through algorithmically supported analysis of live crime data - has been one of the most pertinent and readily implemented new security technologies in recent years, and has sparked wide debates about algorithmic agency, decision-making, and repercussions for social justice. Based on field research within multiple German and Swiss police agencies, this paper engages the ways in which the police react to these debates and try to keep predictive policing software "in check." The analysis identifies four distinct strategies that speak to concerns about possible negative implications of data-driven crime predictions: (1) human oversight; (2) strengthening human reasoning vis-à-vis the machine; (3) invoking data quality and data protection; and (4) contextualization within larger trajectories of police work. Together, these strategies of institutional implementation and practice arguably facilitate the acceptance of new high-tech security tools and allow police agencies as well as politicians to render predictive policing (morally) legitimate in public discourse. On a broader level, the "human in the loop" here stands emblematic for larger policy-making issues around new security technologies and their institutional implementation - ranging from partial automation and cognitive extension up to the potential full-scale automation of security tasks.
- Encounters between people, things and environments