- Convenors:
- Atika Kemal (University of Essex, UK)
- Gianluca Iazzolino (Global Development Institute, University of Manchester)
- Format:
- Paper panel
- Stream:
- Digital futures: AI, data & platform governance
Short Abstract
This panel explores how AI reshapes development through power, inequality and digital dependency. It highlights Big Tech’s role in recasting AI as a critical tool to address sustainability challenges, glossing over data colonialism, surveillance, and the environmental impact of data infrastructures.
Description
As artificial intelligence (AI) technologies increasingly shape economies, governance and development agendas, they raise urgent questions about power, inequality and sovereignty, particularly in the Global South. The key role of Big Tech in crafting hegemonic narratives of AI for sustainability and building data infrastructures brings development into a contested terrain, especially amid funding cuts and global uncertainty (Roberts, 2024; Nayak & Walton, 2024).
This panel interrogates the political economy of AI systems, calling for reflection on how the power relationships underpinning AI systems reshape agency and notions of statecraft and, often, reinforce historical patterns of dependency, exclusion and digital dispossession (Couldry & Mejias, 2019; Birhane, 2021; Heeks, 2022; Cieslik & Margócsy, 2022). Moving beyond techno-solutionist narratives, the panel foregrounds socio-technical, relational and context-specific dynamics (Avgerou, 2019; Noorman & Swierstra, 2023). It challenges depoliticised narratives of digital development (Iazzolino & Stremlau, 2024) by highlighting how AI, as a capitalist tool, supports corporate monopolies, reshapes labour markets (Teutloff et al., 2025) and governance structures (Buhmann & Fieseler, 2023), and enables digital authoritarianism (Heeks et al., 2024).
We invite contributions that critically explore how AI is governed, resisted or repurposed in development contexts, particularly in postcolonial settings. Topics include, but are not limited to: AI geopolitics; the role of Big Tech firms in shaping AI development trajectories and AI infrastructure in South-South or South-North partnerships/competition; national AI strategies; algorithmic injustice; agency shifts in power; surveillance; data colonialism; and bottom-up resistance. This panel aims to foster interdisciplinary dialogue on AI technologies, global inequality and development futures.
Accepted papers
Paper short abstract
Amidst growing understanding of how AI can reinforce existing inequalities and generate others, this paper foregrounds the potential of AI applications for agency shifts in power and bottom-up resistance, drawing on examples from around the world that further climate and environmental justice.
Paper long abstract
There is growing understanding and debate on the ways that AI technologies, infrastructure and knowledge reinforce existing inequalities and generate others. This paper provides a counterpoint by taking a critical view of the conditions under which AI is being repurposed in development contexts to address injustice and tackle unequal power relations. It does so by foregrounding examples of the emancipatory use of AI by marginalised communities and environmental justice activists in the Global South and marginalised areas of the Global North. It argues that the growing deployment of AI-enabled tools such as satellite imagery analysis, machine learning for pollution detection, and automated evidence compilation can help to identify, publicise, and challenge environmental harms. The aim is to critically examine whether and how AI can be appropriated from below to resist environmental injustices and produce counter-expertise.
Paper short abstract
Pakistan’s AI policy reflects power asymmetries, tech dependency, and centralised control, shaped by geopolitical influences that raise legitimacy and ethical concerns. The study urges context-sensitive and justice-oriented reforms for more inclusive AI development and governance.
Paper long abstract
As Artificial Intelligence (AI) technologies rapidly transform organisations and society, global and national policies are emerging to regulate AI development and governance frameworks. However, the global AI policy landscape remains dominated by Western, Eurocentric perspectives, underlining the dominant role of the Global North in agenda setting. While the Global North largely determines the strategic priorities that shape AI development and governance in the West, power asymmetries may directly or indirectly influence national AI policies in the Global South. Much of the policy literature underrepresents Pakistan’s position as a Global South actor. Hence, this study critically analyses Pakistan’s AI policy through a political economy lens. The policy, while framed around innovation and development, reflects deeper dynamics of geopolitical influence, power asymmetry, and tensions between sovereignty and technological dependency that shape national ambitions for digital transformation. Rooted in a centralised, top-down framework, the policy demonstrates limited inclusion of civil society, marginalised groups, and grassroots stakeholders, raising deeper concerns about democratic legitimacy, transparency and ethical regulation. By situating AI policymaking within broader global and domestic power relations, this paper underlines the need for a more context-sensitive, justice-oriented approach to inclusive AI innovation and regulation.
Paper short abstract
This paper examines how power asymmetries shape AI governance, showing how formal digital trade rules and informal political economy dynamics—such as lobbying, capacity gaps, and geopolitics—drive development outcomes and constrain regulatory sovereignty over AI in the Global South.
Paper long abstract
As AI increasingly shapes global economic relations, asymmetries between rule-makers and rule-takers have become more pronounced, particularly in the Global South. Integrating political economy analysis into AI governance is therefore essential to understanding—and preventing—the reproduction of power imbalances and economic inequalities within digital economies (Banga and Hernandez 2021; Beyleveld and Sucker 2023).
This paper interrogates the evolving architecture of AI governance by examining how formal rules—such as trade agreements, digital trade chapters, and regulatory standards—interact with informal norms. It employs John Gaventa’s ‘powercube’ framework to examine how ‘visible’, ‘hidden’, and ‘invisible’ forms of digital power are opening or closing spaces of decision-making on digital trade and AI flows; the actors that are shaping the dominant narrative on digital trade; and their incentives and motivations. Adopting a political economy approach, it explains why particular relationships, alliances, and disagreements emerge around digital trade and AI rules.
Drawing on thematic analysis of policy documents, and in-depth elite interviews, including with high-level digital trade policy makers of developing economies, the analysis shows that beyond the text of formal agreements, underlying political incentives, strategic alignments, and informal “rules of the game”—including lobbying networks, capacity asymmetries, and geopolitical interests—play a decisive role in determining whose preferences prevail and how digital trade rules operate in practice. In exploring this, the paper offers a more grounded account of how formal and informal power interact in shaping AI outcomes.
Paper short abstract
This paper compares competing AI pathways in developing contexts, contrasting digital authoritarianism with developmental governance. It examines how political institutions shape AI use for control, capacity-building, and development outcomes.
Paper long abstract
This paper examines how artificial intelligence (AI) is reshaping governance trajectories in developing and transitional economies through two competing pathways: digital authoritarianism and developmental governance. While AI is frequently promoted as a neutral tool for efficiency, service delivery, and innovation, its deployment is deeply embedded in political and institutional contexts that shape development outcomes.
The first pathway, digital authoritarianism, emphasises AI-enabled surveillance, predictive policing, population monitoring, and behavioural control. In this model, AI strengthens executive power, reduces political accountability, and normalises data-driven social control, often under the banner of security, efficiency, or crisis management. Rather than enhancing state capacity in a developmental sense, such uses of AI risk undermining institutional trust, civil liberties, and inclusive growth.
The second pathway, developmental governance, frames AI as a public infrastructure supporting administrative capacity, transparency, and equitable service provision. When embedded in robust regulatory frameworks, independent oversight mechanisms, and participatory governance structures, AI can enhance policy effectiveness and contribute to long-term development goals. This pathway aligns more closely with the principles of accountable institutions and inclusive decision-making emphasised in SDG 16.
Drawing on comparative examples from emerging economies and hybrid political systems, the paper adopts a political economy approach to analyse how power relations, market structures, and institutional design determine which pathway prevails. It argues that AI does not inherently produce authoritarian or developmental outcomes; rather, outcomes depend on governance choices, regulatory capacity, and the balance between public interest and private technological power.
Paper short abstract
South Africa's tech ecosystem shows how venture capital reproduces racial exclusion and extractive innovation. This paper argues efficiency operates as capitalist ideology, not neutral optimisation, privileging extraction over transformation and encoding colonial hierarchies into digital futures.
Paper long abstract
Artificial intelligence is positioned as the transformative core of contemporary digital economies, yet critical interrogation of who controls AI development, whose rationality shapes its design, and who benefits from its deployment remains sparse, particularly from Global South perspectives. This paper examines the political economy of AI-driven innovation through empirical research on South Africa's technology startup ecosystem, where AI constitutes the foundational infrastructure of platform-based business models.
Drawing on analysis of 120 startups launched between 2013 and 2018 and a national foresight study examining digital economy futures (2025-2035), this research reveals how, three decades after apartheid's end, venture capital financing structures continue to produce demographic exclusion and extractive innovation trajectories. The ecosystem exhibits stark racial stratification: in Cape Town, 80% of funded startups were white-founded, while nationally, 60% were concentrated in fintech and e-commerce platforms optimised for transaction extraction rather than productive transformation. Black entrepreneurs remain confined to survivalist ventures sustained by philanthropic funding that cannot support scale.
Theoretically, the paper draws on Marcuse's critique of technological rationality to demonstrate how efficiency - AI's dominant ideological goal - operates not as neutral optimisation but as capitalist discipline that privileges extraction over transformation. Using Bonilla-Silva's racial praxis and Calderón's epistemic violence frameworks, the analysis shows how supposedly neutral market mechanisms reproduce structural racism.
Rather than democratising development, South Africa's AI-driven digital economy replicates colonial patterns of extraction and exclusion through new technological forms. The paper demonstrates that AI's political economy fundamentally determines whether these technologies serve post-apartheid transformation or reproduce inequality.
Paper short abstract
This paper examines the socio-economic impacts of AI in the context of North-South divides and how neo-colonial dynamics and narratives shape the role of AI between the northern and southern states in Nigeria.
Paper long abstract
In the Global North, the emergence of AI and the drive for transformation across several facets of life have produced evident and rapid change in science and technology, ecosystems, health and general welfare. The push to adopt AI highlights challenges across several of Nigeria’s development sectors. It is often assumed that current AI practice operates under the tutelage of the Global North, strengthening the narrative that the Global South stands to benefit less from these processes. This paper seeks to understand this North-South inconsistency and the lopsided access to AI practice between the northern and southern states in Nigeria.
This study is warranted because the Global South is lagging behind, for reasons including media bias and neo-colonial legacies. Even though Nigeria may appear to perform satisfactorily in terms of AI usage, it still faces setbacks in different forms, including a shortage of skilled professionals and weak systems for progress and growth across sectors.
In development sectors, AI algorithms are revolutionising practices in medical image analysis and personalised medicine, and the finance sector is embracing AI for tasks including fraud detection and algorithmic trading. Countries such as Germany, Canada, the United Kingdom, the USA, the Netherlands, and Singapore are at the forefront of integrating AI into transportation systems. These trends call for strategies to understand the new dynamics in developing countries, with a particular focus on southern and northern Nigeria.
Paper short abstract
This paper examines power asymmetries created by the digitalisation of food assistance, in particular through the extraction of data from marginalised populations and its storage and use by Big Tech and global finance. It includes analysis from Sudan, India and England.
Paper long abstract
Over the past decade, digital technologies have become an integral part of food assistance and social welfare globally. This includes biometric ID documents, electronic vouchers, debit-type cards and mobile money transfers, that are used on the most marginalised and food insecure populations. At the same time, they have the potential to feed into structural inequalities, as almost all digital technologies involve powerful national and global businesses.
In this paper, we compare findings from research on the political and economic effects of digitalising food assistance in Sudan, India and England, and examine global implications. We argue that marginalised populations are coercively included in digitalised food assistance, and that digitalisation entails intrinsic power asymmetries through political, economic, and cultural domination. We reflect in particular on data extraction and assetization, and the role of multi-national – often US-based - financial and data management corporations (Big Tech) in storing and processing the data collected from marginalised, poor, and crisis-affected populations. We also look at the role of national and regional institutions (including banks and telecoms companies), their practices, and local, national and international effects. Finally, we consider whether their involvement can be conceptualised as forms of data colonialism, techno-feudalism, or digital imperialism, given that it does not involve the physical occupation of territory and that it includes the Global North. On a practical level, we consider implications for addressing food insecurity.
Paper short abstract
The rapid growth of AI/Cyber tools in the West African economy has created new opportunities for innovation, commerce, and digital inclusion. However, AI applications/expansion have also exposed the region to heightened cyber risks, agency shifts in power, surveillance, and data colonialism.
Paper long abstract
This paper examines the current state of cybersecurity in West Africa through the lens of resilience theory, focusing on national policy frameworks, institutional capacity, and regional coordination mechanisms. The rapid growth of the digital economy and emerging AI dependency in West Africa has created new opportunities for innovation, commerce, and digital inclusion. However, digital expansion has also exposed the region to heightened cyber risks, underscoring the urgency of developing robust cybersecurity frameworks. Drawing on a textual analysis methodology, the paper highlights fragmented policy landscapes, weak regulatory enforcement, and under-resourced institutions as key vulnerabilities. Another key finding is that, although regional bodies, including ECOWAS and the African Union, have initiated AI/cybersecurity protocols, their implementation remains uneven and underfunded. Simultaneously, while governments have begun adopting digital/AI strategies, reliance on external actors for technical expertise and infrastructure raises questions about digital sovereignty. Further, AI/cybersecurity awareness initiatives, though limited, have strengthened individuals’ ability to recognize common AI/cyber threats and adopt basic protective behaviors. However, these gains are often offset by organizational vulnerabilities, including weak data protection practices, surveillance, agency shifts in power, data colonialism, and inadequate technological safeguards. Collectively, these vulnerabilities constrain trust and investment in West Africa’s digital economies. We recommend a shift toward coordinated, well-resourced, and contextually grounded AI/cybersecurity strategies that align with the broader goals of AI development in the region. Regional coordination through ECOWAS should also move beyond normative frameworks toward operational mechanisms, and AI/cyber risk insurance should be treated as a complementary instrument rather than a substitute for public governance.
Paper short abstract
This paper uses evidence from India’s online food delivery sector to examine how algorithmic work management produces “algorithmic despotism,” market concentration, and data opacity, eroding worker autonomy and reinforcing labour precarity in the Global South.
Paper long abstract
This paper examines how, in India's online food delivery (OFD) sector, AI-enabled algorithmic management and platform-driven data infrastructures impact employment relations, worker agency, and work precarity. Drawing on primary survey data collected from 326 workers across three cities in India, the paper explores how duopolistic market concentration, algorithmic governance of work, and platform-mediated entry barriers impact work autonomy. Findings support the "autonomy-control" paradox, with workers reporting long working hours (11.2 hours daily, on average) and extensive unpaid waiting time, alongside average gross monthly earnings below Rs. 24,000. Moreover, implicit platform control mechanisms, including dynamic incentive structures, explicit and implicit penalties, and gamified rewards, intensify labour discipline, increase economic dependence, and further reduce effective earnings. High market concentration weakens worker bargaining power, while platform-linked vehicle loans and rental arrangements generate new forms of financial dependency and diffuse accountability across platforms and intermediaries. Algorithmic opacity, fear of deactivation, and rapid workforce replaceability challenge collective bargaining and constrain labour agency. Situating these dynamics within a Global South context, the paper extends debates on algorithmic despotism and the autonomy-control paradox by highlighting the mechanisms through which AI-mediated governance reproduces historical patterns of exclusion and dependency in postcolonial labour markets. It emphasizes the need for policy interventions addressing algorithmic transparency, data accountability, minimum wage enforcement, social protection, and institutionalised worker representation in platform-mediated economies.
Paper short abstract
Challenging AI “catch-up” narratives in development, this paper draws on Hymer to analyse the political economy of platform capitalism. It shows how AI operates as private infrastructure, concentrating power in Big Tech and producing firm-level hierarchy, rent extraction, and digital dependency.
Paper long abstract
Contemporary discussions of artificial intelligence and development frequently invoke “catch-up” narratives, suggesting that AI diffusion may enable late-developing economies to leapfrog established industrial trajectories. This paper questions such claims by combining an analysis of platform capitalism with insights from Stephen Hymer’s theory of firm-level power and foreign direct investment.
The paper conceptualises platform capitalism as a structural transformation in which economic coordination, value extraction, and market governance are organised through digital platforms that function as privately owned infrastructures, controlled by Big Tech firms. Rather than competing within markets, firms increasingly control markets by setting rules, governing access, and extracting rents through data, algorithms, and proprietary standards.
The paper advances three arguments. First, AI-led accumulation and adoption are driven primarily by control rather than efficiency. Returns accrue disproportionately to owners of capital controlling models, data, and compute, reinforcing capital–labour asymmetries within and across countries, with implications for labour insecurity, deskilling, and bargaining power. Second, asymmetries in firm-level power precede and shape state-level outcomes. What is distinctive in the AI era is that a small number of large language model firms operate as general-purpose infrastructures, exercising cross-sectoral influence through platforms and APIs rather than sector-specific dominance. Third, technological advantage brings durable dependency through organisational hierarchy: although peripheral economies may adopt AI technologies, they remain structurally excluded from co-determining model architecture, standards, and innovation trajectories, contributing to new forms of digital dependency and dispossession.
Revisiting Hymer clarifies how AI reshapes development through the reorganisation of power, hierarchy, and coordination in contemporary capitalism.
Paper short abstract
This paper examines how institutional data hoarding creates digital inequality in West Africa, enabling tech firms with proprietary data to dominate while public planners pay for access and open-source tools lag, deepening dependency and spatial injustice.
Paper long abstract
This paper explores how poor data governance in West Africa—characterized by bureaucratic gatekeeping, misinterpretation of privacy laws, and informal monetization of datasets—creates a new form of digital inequality. Drawing on the donor-funded Artificial Intelligence for Energy Access (AI4EA) project, implemented in five West African states between 2024 and 2025, this study argues that the inability to access publicly funded, geocoded household energy data perpetuates regional disparities in AI-driven energy planning. While Nigeria benefits from open data ecosystems, neighboring countries remain data-poor, leading to less accurate models and inefficient electrification pathways. A key consequence is that large and foreign tech firms, able to deploy capital to collect their own proprietary data, develop more accurate AI models, consolidating their market advantage while open-source tools lag. Public-sector planners are then forced to pay these foreign firms for access to proprietary tools, creating a cycle of dependency and financial drain.
This dynamic entrenches a form of digital rent-seeking, where public data generated through donor-funded surveys is withheld, only to be replaced by privatised data products that states must purchase. The resulting inequality is twofold: it impedes local innovation and erodes digital sovereignty, as planning capabilities become outsourced. The paper frames this not merely as a technical bottleneck, but as a governance failure that reproduces and deepens socio-spatial divides. It concludes with a call for reimagining data as a public good and strengthening mandates for open data to ensure AI serves equitable, self-determined development.
Paper short abstract
AI reshapes global power by boosting corporate productivity, enhancing state military and diplomatic capabilities, and widening the gap between AI-literate and non-literate individuals. This paper analyzes these emerging economic, geopolitical, and social asymmetries.
Paper long abstract
Artificial intelligence is rapidly becoming a foundational driver of global power, transforming economic competition, geopolitical strategy, and social opportunity structures. This paper examines AI as a multifaceted power multiplier across three levels of analysis. First, it explores how AI strengthens corporate economic power by accelerating productivity, enabling new data-driven business models, optimizing global supply chains, and reinforcing market concentration among firms that control key algorithms, datasets, and compute resources. Second, it analyzes AI’s impact on states’ military and diplomatic power, including its role in autonomous systems, intelligence analysis, cyber operations, influence campaigns, and the negotiation leverage that comes from technological dominance and strategic dependencies in AI hardware and software ecosystems. Third, it investigates how AI literacy has become a new axis of social differentiation, empowering individuals who can effectively use or build AI tools while marginalizing those without such skills, thus reshaping labor markets, democratic participation, and access to knowledge.
By linking corporate, state, and individual-level dynamics, the paper argues that AI is not merely another technological innovation but a structural force re-configuring global hierarchies. It highlights emerging risks (such as power concentration, inequality, loss of sovereignty, and strategic instability) and outlines policy pathways to ensure that AI serves as a broadly distributed public good rather than a narrow amplifier of existing dominance.