- Convenors:
  - Josiah Akande (Eruditepen Research and Writing Agency)
  - Abiola Moses Akinwale (University of Ibadan, Nigeria)
- Format:
- Paper panel
- Stream:
- Digital futures: AI, data & platform governance
Short Abstract
This panel examines how artificial intelligence and data-driven technologies are reshaping criminal justice systems and security governance in the Global South and North. It interrogates whether AI can advance justice or merely deepen existing power asymmetries in criminalization and punishment.
Description
From predictive policing algorithms deployed in urban centers across the Global South to biometric surveillance systems managing migration and borders, AI technologies promise efficiency, objectivity, and enhanced security. Yet these same systems often encode and amplify existing inequalities, reproducing racial biases, criminalizing poverty, and expanding state surveillance capacities in ways that disproportionately affect marginalized communities.
This panel critically examines how AI and data science are reshaping security governance, asking whether these technologies can advance justice or merely digitize historical patterns of control and exclusion. Papers might explore the deployment of facial recognition in policing, algorithmic risk assessment tools in bail and sentencing decisions, the role of private tech corporations in shaping security policy, and the growing digital infrastructure of borders and migration control. We particularly welcome contributions examining grassroots resistance to surveillance technologies, community-led accountability mechanisms, and alternative visions of public safety that challenge techno-solutionist approaches.
The panel interrogates fundamental questions about power in the digital age: Who designs these systems and whose interests do they serve? How do AI technologies reconfigure relationships between states, corporations, and citizens? What forms of agency emerge in response to algorithmic governance? Can development frameworks adequately address the justice implications of predictive technologies, or do we need entirely new paradigms for understanding digital rights, bodily autonomy, and freedom in surveillance societies? This panel invites interdisciplinary dialogue between criminology, data science, development studies, and critical technology studies to reimagine security beyond punitive and extractive logics.
Accepted papers
Paper short abstract
This paper examines how predictive policing systems turn algorithmic risk into a governing logic in contemporary security institutions. It explores how these systems reshape authority and accountability, and how forms of resistance emerge across criminal justice contexts in the Global South and North.
Paper long abstract
Predictive policing systems increasingly shape how security institutions anticipate crime and justify intervention. Rather than treating these systems as neutral tools that assist human decision-making, this paper approaches predictive policing as a form of security governance in its own right, one that embeds statistical risk into everyday practices of policing and punishment. As risk scores gain authority, they quietly redefine what counts as reasonable suspicion, how responsibility is assigned, and where accountability is located when harm occurs.
The paper focuses on how predictive policing models travel across the Global South and North through shared vendors, technical standards, and policy imaginaries. While these systems are introduced in very different institutional settings, they carry similar assumptions about crime and uncertainty. The result is a recurring pattern in which algorithmic risk legitimises expanded surveillance while dispersing responsibility across police agencies and institutions. In this context, questions of justice are increasingly mediated by data infrastructures that are difficult to contest or even fully understand.
The paper also examines emerging forms of resistance to predictive policing, including legal challenges, institutional refusals, and collective efforts to question the authority of algorithmic risk itself. These responses are treated as governance practices rather than reactive opposition, revealing how agency persists within systems of algorithmic control. By foregrounding the political and institutional work performed by predictive technologies, the paper contributes to ongoing debates about whether AI can support more just forms of security governance, or whether it consolidates new modes of digital control under claims of efficiency and objectivity.
Paper short abstract
Algorithmic policing intensifies gendered surveillance under the guise of protection. Using feminist ethics of care, this study shows how "surveillance care" regulates women while undermining relational security, and develops a care-based framework for non-punitive, responsible AI governance.
Paper long abstract
Algorithmic policing is increasingly justified as preventive care in contemporary security regimes. Yet these systems reshape security governance in profoundly gendered ways, through widespread surveillance that disproportionately targets women and other underprivileged groups. Studies of racial injustice, bias, and accuracy abound in debates on AI governance and predictive policing. Although feminist interventions have drawn attention to surveillance as a site of gendered control, mainstream criminology and data science debates continue to prioritise risk management and punitive security logics over relational and ethical concerns. What remains under-theorised is how algorithmic policing reframes surveillance as care, masking coercion as protection and normalising the governance of vulnerability. This rhetorical shift obscures the ethical violence embedded in data-driven security practices and silences feminist critiques of relational harm and moral responsibility. Through conceptual analysis and critical interpretation of policy discourses on AI-enabled policing, the paper employs a feminist philosophical method grounded in the ethics of care, bringing critical technology studies into conversation with feminist care theory to challenge the moral presumptions behind algorithmic security systems. I contend that "surveillance care" is a gendered form of governance that evacuates real care, relational accountability, and consent while disciplining bodies through predictive risk. Feminist ethics of care exposes the conflict between care as algorithmic control and care as lived relational practice. The paper contributes a normative feminist framework for reimagining security governance beyond punitive and extractive logics, advancing care ethics as an ethical constraint on AI-driven policing and expanding feminist philosophy’s engagement with digital security futures.
Paper short abstract
Comparing Cape Town's operational predictive policing with Lagos's emerging surveillance infrastructure reveals how colonial legacies and corporate power shape algorithmic security governance across African cities, demanding new accountability frameworks.
Paper long abstract
This paper comparatively examines predictive policing adoption in Cape Town and Lagos, revealing how colonial legacies, political economies, and urban geographies shape AI-driven security governance. Drawing on 18 months of multi-sited ethnographic research, I analyze two divergent trajectories of algorithmic policing in major African cities.
Cape Town deploys operational predictive algorithms forecasting gang violence in Bellville and the Cape Flats, while Vumacam's license plate recognition network generates thousands of daily alerts. These systems extend South Africa's spatial surveillance history—from apartheid pass laws to contemporary management of racialized inequality—yet face growing civil society resistance challenging discriminatory impacts.
Lagos follows a different path: predictive policing remains in pilot phases, but the city has invested hundreds of millions in Chinese-built "safe city" infrastructure—CCTV cameras, facial recognition systems, and centralized monitoring centers from Huawei and ZTE. This surveillance apparatus creates the technical capacity and data infrastructure for algorithmic crime prediction while questions of data sovereignty, corporate influence, and accountability remain unaddressed.
Comparing Cape Town's operational systems with Lagos's emergent infrastructure reveals how global technologies are locally adapted, resisted, and reimagined. I argue that understanding AI governance in African cities requires analyzing not merely implemented algorithms but the broader sociotechnical assemblages—surveillance networks, public-private partnerships, colonial spatial logics—that make predictive systems possible, contested, and consequential. This comparative approach illuminates critical intervention points for accountability and alternative visions of urban security beyond punitive algorithmic control.