Over the last ten years in North America and England, predicting the probability of future crime in space and time has become an important program of research and experimentation that local police managers have labeled "Predictive Policing". Behind this label, often associated in the media with science fiction films, are criminologists, mathematicians, and computer scientists who use simulation and artificial-intelligence tools to test theories from the "science of crime". The key concept on which this project rests is "repeat victimization": the idea that a first victimization is the best predictor of future victimization. In this presentation, I will trace the origins of this axiom in order to open the black box of one of the algorithms designed to anticipate victimization (especially burglary). From an internalist perspective, we observe controversies over the causes of crime, statistical models, and prediction algorithms. From an externalist perspective, we see the long-standing issue (dating to the 1970s) of a difficult reform of the police system, and groups of scientists more or less allied with the police infrastructure. The main aim of this presentation is to show the political ramifications of algorithmic policing.