Predictive policing, the algorithmic construction of crime risk areas, has to be grasped as a socio-technical process. This implies carefully analysing the development and legitimisation processes of such technologies, as well as the practical effects of their use.
Throughout the world, police departments have started to implement predictive policing software in order to generate geospatial crime predictions. Algorithmically calculated "risk areas" indicate spaces where crimes are estimated to occur with increased likelihood. This process of risk computation is notably a socio-technical one: the predictions are products of algorithmic analyses of crime data that result in geospatial visualisations, which in turn have to be interpreted and acted upon by police officers at the street level.
Conceptualising the use of police prediction software as a socio-technical environment of interaction, we aim to specify the concrete distribution of agency between human operators and technological tools in the creation of crime risk. This means carefully analysing the development processes of the prediction technologies in use, and highlighting the discourses and expectations tied to these innovations. How are implementation processes or pilot runs legitimised? What role do the developers and distributors of the technologies play in these contexts?
It also requires analysing the practical effects of the socio-technical interaction of predictive policing, notably the changing ways of policing in designated risk areas and the potential implications for police law and criminal justice. Who is controlled by the police in such areas, how is suspicion created, and what role does the prediction technology play in this process? In general terms: How is police work modified by the use of prediction software? What epistemic and practical effects can be observed?