- Convenors:
- Praveen Priyadarshi (Indraprastha Institute of Information Technology, Delhi (IIITD))
- Manohar Kumar (Indraprastha Institute of Information Technology)
- Format:
- Paper panel
- Stream:
- Digital futures: AI, data & platform governance
Short Abstract
The panel interrogates the key considerations shaping AI regulation in countries of the Global South. Framing AI regulations without an epistemic reimagination of technology and its relationship with development raises questions about their normative and developmental effectiveness.
Description
The panel will focus on two broad approaches: ethical determinism and tech-solutionism. Ethical determinism is the idea that AI governance derives from ethical guidelines, while tech-solutionism assumes that technology holds solutions to all socio-economic and cultural challenges. The tech-solutionist approach is rooted, at least in part, in colonial and postcolonial legacies, especially their impact on the imagination of technology as a force of socio-economic transformation. Framing AI regulations uncritically, however, underplays the entire gamut of social, political, and cultural factors that shape the interplay between technology and governance. It also sidesteps the unequal conditions under which AI technology is produced and deployed, and the unequal voice and power relations that shape guidelines. The panel will interrogate the different imaginations and conceptions that go into framing AI governance guidelines, in terms of the burdens and costs they potentially impose on the Global South. To examine such epistemic contestations, the panel invites both theoretical work on developments and regulation in AI and empirical work on specific AI regulations. It will further ask what alternative imaginations, social and political processes, and values can shape AI governance for a just, equitable and sustainable future. Papers exploring the different meanings of technology, and the structures in which these meanings are constituted and contested, are also welcome. Finally, the panel invites papers that make a critical foray into the various actors, institutions, and networks that should play a central role in shaping regulations.
Accepted papers
Paper short abstract
This paper studies AI and Sustainability-related policies to examine the socio-ecological implications of AI infrastructures in India and Mexico. It draws on interviews with AI policy stakeholders in both countries to understand their perspectives, rationalities and justifications.
Paper long abstract
Growing concerns surround the increasing energy demands and environmental impacts of resource extraction for AI chips, the running of AI data centres, and the training of models (Crawford, 2021; Hodgkinson et al., 2024; Lehuedé, 2025). Scholars have drawn attention to the difficulty of calculating the environmental costs of AI technologies and to greenwashing, and there have been efforts to raise awareness of these impacts (Hao, 2024, 2025; Heikkilä, 2022). Current scholarship at the intersection of AI, sustainability and policy studies has tended to focus on leveraging AI to achieve sustainability goals (Nishant et al., 2020). However, little is known about the experiences and policies that shape the lives of those most affected by new data infrastructures. This is critical because Global South countries bear a disproportionate share of the environmental costs associated with data centre and ICT infrastructure development, while reaping fewer of the benefits of digitisation (UNCTAD, 2024).
Our paper draws on an ongoing research project examining the socio-ecological implications of AI infrastructure development in India and Mexico, with a focus on AI policies and the lived experiences of communities. In this presentation, we focus specifically on AI and Sustainability-related policies, including international, Indian and Mexican policy documents on AI technologies, as well as adjacent and associated policies (e.g. land acquisition, water resourcing, environmental clearances and labour laws). This is supplemented by semi-structured interviews with AI policy stakeholders in India and Mexico (n=10 in each country) to understand their perspectives on the sustainability implications of AI technologies and infrastructures.
Paper short abstract
AI governance is often framed through ethical and techno-solutionist models presented as neutral and universal. This paper argues that such approaches reproduce epistemic injustice and digital colonialism, and advances epistemic sovereignty as a decolonial alternative.
Paper long abstract
Artificial Intelligence (AI) governance is dominated by regulatory frameworks that foreground ethical principles. While these approaches are presented as neutral and progressive, this paper argues that they reproduce deep epistemic and structural inequalities when transposed uncritically into Global South contexts. Anchored in ethical determinism and technological solutionism, prevailing AI governance regimes tend to obscure the historical and cultural conditions that shape the production of AI. This paper contends that such approaches perpetuate epistemic injustice and entrench forms of digital colonialism, wherein Global South societies become sites of data extraction and experimentation while remaining marginal to decision-making and lacking equal control over technological design and governance. The asymmetrical distribution of regulatory costs and benefits raises critical questions about whose interests AI governance ultimately serves and whether existing frameworks meaningfully address economic dependency and infrastructural inequality. Against this background, the paper advances the concept of epistemic sovereignty as a normative and political horizon for decolonising AI governance in the Global South. It further examines how AI governance is shaped by transnational technology corporations, international standards bodies, and Global North research institutions, while Global South states, communities, and knowledge producers are marginalised. This mirrors colonial extractive relations, in which value is generated from Global South resources without meaningful participation or benefit-sharing. In doing so, the paper exposes how tech-solutionist approaches to AI governance obscure power asymmetries and legitimise dependency under the guise of innovation and development.
Paper short abstract
AI governance is analysed as epistemic contestation through Vietnam’s 2025 IP Law. Article 7(5) shows how legal definitions of AI training privilege technocratic knowledge, marginalise creative labour, and enable asymmetric value extraction in a Global South context.
Paper long abstract
AI governance is increasingly recognised as a site of epistemic contestation, where competing understandings of technology are stabilised through regulation. This paper examines Vietnam’s amended Intellectual Property Law (2025), focusing on Article 7(5), which permits the use of publicly available copyrighted works for AI training without author consent or remuneration. The paper argues that this provision reflects not merely a technical policy choice, but a deeper conflict over how AI and creativity are conceptualised.
Drawing on public submissions and statements from artists and creative workers, the paper identifies two competing epistemic framings. The state’s legal approach conceptualises AI training as neutral data processing and downplays human creativity in AI-generated outputs. In contrast, creative practitioners frame AI as the outcome of distributed human labour, in which training constitutes a form of value extraction that undermines authorship, economic sustainability, and control over cultural production. This epistemic disjuncture enables innovation to be legally recognised while rendering creative labour invisible.
From a Global South perspective, the paper further demonstrates how this regulatory framing facilitates asymmetric value flows. While domestic creative works become legally available as training data, the economic benefits of AI development remain concentrated in transnational platforms and model owners. AI governance in this context thus reproduces extractive development patterns, now embedded in intellectual property and data regimes.
The paper contributes to AI governance and development studies by foregrounding intellectual property law as a key site where epistemic authority over AI is negotiated.
Paper short abstract
We argue that AI governance in legal aid contexts cannot be reduced to compliance with abstract ethical norms. Instead, it must be understood as a situated political process in which competing imaginaries of technology, justice, and development are negotiated, reflecting 'whose knowledge counts'.
Paper long abstract
As artificial intelligence (AI) is increasingly promoted as a solution to access-to-justice challenges, debates around its governance are often framed through universal ethical principles or technocratic problem-solving. This paper challenges both ethical determinism and tech-solutionism by examining AI governance as a process of epistemic contestation within legal aid systems in the Global South. Drawing on empirical work conducted in Tanzania, the paper analyses the development of draft regulations on legal technology for access to justice, consulted on during the East Africa Legal Tech for Legal Aid and Access to Justice Conference hosted at the University of Dar es Salaam in February 2025.
The regulatory process brought together legal aid institutions, paralegals, technologists, policymakers, donors, and researchers to negotiate how AI should be governed within fragile justice ecosystems marked by resource constraints, legal pluralism, and colonial legacies. Rather than adopting pre-existing global AI ethics frameworks, the resulting regulatory approach foregrounded principles of decolonisation, justice-centred design, human oversight, data sovereignty, and participatory governance. AI was explicitly limited to decision-support and administrative functions, rejecting automation of legal judgment and recognising the risks of surveillance, power concentration, and digital authoritarianism in justice systems. By foregrounding whose knowledge counts in shaping regulation, this case contributes a Global South perspective to debates on AI governance, highlighting the need to rebalance epistemic authority to ensure just, accountable, and contextually grounded AI futures.
Paper short abstract
This paper examines AI-driven revenue mobilisation in Kenya through the integration of telco data with tax systems. It critiques tech-solutionist AI governance under Kenya’s Data Protection Act and AI Strategy, arguing that procedural compliance obscures political and distributive challenges.
Paper long abstract
This paper critically examines tech-solutionism in AI governance through the empirical case of AI-driven revenue mobilisation in Kenya. Recently, the Kenyan government decided to integrate telecommunications and mobile money transactions data with Kenya Revenue Authority (KRA) systems as a strategy to enhance tax compliance, widen the tax base, and curb revenue leakages (Republic of Kenya, 2023). These initiatives are framed as neutral, efficiency-enhancing technological fixes to structural revenue challenges.
Considering Kenya’s Data Protection Act (2019) and the National Artificial Intelligence Strategy 2025–2030, the paper argues that such solutionist framings obscure the socio-political and distributive dimensions of taxation. While the Data Protection Act provides safeguards against solely automated decision-making and mandates data protection impact assessments (Republic of Kenya, 2019), in practice these mechanisms are operationalised as procedural compliance tools within revenue analytics systems. Applying predictive models trained on integrated data risks disproportionately targeting the informal sector and intensifying surveillance without addressing the underlying causes of informality.
The National AI Strategy positions AI as a driver of public sector efficiency and economic growth (Government of Kenya, 2024), reinforcing a developmental narrative in which technological deployment precedes institutional readiness and democratic oversight. This dynamic is conceptualised as epistemic displacement, whereby locally grounded understandings of informality, state–citizen trust, and fiscal justice are displaced in favour of emerging technologies.
By foregrounding revenue mobilisation as a critical site of AI governance, the paper challenges ethical determinism and argues for a context-sensitive approach that treats AI governance as a political and distributive process rather than a purely technical one.
Paper short abstract
AI governance often focuses on automation while overlooking the human labour that sustains AI systems. In this paper, I examine how job displacement and hidden AI work in the Global South are interconnected, and why this connection matters for how AI is governed.
Paper long abstract
Recent AI developments have reshaped work in two interconnected ways. On the one hand, automation has replaced or restructured jobs in sectors such as customer service, writing, and administration. On the other, AI systems depend on extensive human labour for data annotation and content moderation in order to function (Muldoon et al., 2024). This labour is often precarious, outsourced, and concentrated in the Global South (Plantin, 2021).
I argue that contemporary AI governance in the Global South treats automation as the primary focus while making the human labour sustaining AI largely invisible. Most policy debates and regulatory frameworks emphasise deployment, ethical use, and efficiency, while paying limited attention to working conditions and social costs embedded in AI production.
By analysing documented cases of data annotators and content moderators in India, Kenya, and the Philippines, I show how AI systems rely on sustained human judgement to classify images, interpret language, and filter harmful content (Gray & Suri, 2019; Pogrebna, 2024). The very outputs of this labour, in turn, enable further automation, contributing to job displacement elsewhere (OECD, 2019; Frey & Osborne, 2017).
By holding these two sides together, the paper reframes AI governance as an epistemic and labour question rather than a purely technical or ethical one. I show how governance frameworks that ignore this interdependence risk legitimising automation while leaving the human work behind AI unaccounted for. Recognising this relationship is crucial for developing AI governance approaches that address not only the effects of AI, but also the labour relations through which AI is made possible in the first place.
Paper short abstract
This paper examines how inequalities in AI knowledge production shape governance debates. Using original global co-authorship data, it shows how limited epistemic voice in the Global South risks reinforcing inequitable and developmentally misaligned AI regulation.
Paper long abstract
Recent debates on artificial intelligence governance increasingly emphasise ethical frameworks and regulatory principles as central tools for managing AI’s societal impacts. However, such approaches emerge under conditions of profound inequality in the production and circulation of technological knowledge. This paper examines how uneven participation in AI research shapes the epistemic foundations of AI governance and raises questions about the developmental relevance of emerging regulatory agendas, particularly their alignment with local capacities, priorities, and institutional contexts.
The analysis draws on an original dataset of AI-related research co-authorship from 2013 to 2022, disaggregated across key AI fields including Robotics, Natural Language Processing, Computer Vision, Large Language Models, and AI Safety. Collaboration patterns are analysed by countries’ membership in the Global North or Global South and by geopolitical grouping (United States, China, Europe, and others). The findings reveal a persistent concentration of AI research in high-income countries, alongside highly asymmetric cross-regional collaborations. Field-level analysis further shows that AI Safety remains a comparatively less developed area of AI knowledge production, indicating potential entry points for broader participation by the Global South.
Participation in knowledge production matters because it shapes problem definitions, risk perceptions, and the normative assumptions that inform regulatory agendas. The paper argues that AI governance is shaped not only through formal regulatory processes, but through historically embedded and asymmetrical power relations, with knowledge production serving as a key observable dimension. The paper concludes by outlining alternative pathways for AI governance that foreground epistemic inclusion, capacity-building, and the strengthening of regional research ecosystems.
Paper short abstract
Research on AI policy argues that desirable futures are located in policy. The role of disruptive claims in normalising AI remains underexplored. This paper reads Stiegler’s concept of disruption through STS to argue that disruption alters ideas of the future and enables state visions of technology.
Paper long abstract
The deployment of AI in society has been accompanied by claims of disruption across multiple government ministries and organisations. Generalised disruption is claimed across key sectors of society, making space for technological adoption. This paper examines the concept of disruption in AI policies across the Global North and the Global South as a call to specific visions of the future, in which AI is normalised and essential to enhancing the functioning of society. The paper utilises the concept of Socio-technical Imaginaries (SI), which emphasises policy analysis as a means of understanding desirable futures. Research has already discussed SI in the context of AI strategies. However, the role of disruption in AI policy and desirable futures remains underexplored, despite disruption being discussed as a component of emerging technologies (Hopster, 2021). The paper synthesises Bernard Stiegler’s concept of disruption, an epochal event that renders citizens in algorithmic and data societies unable to visualise the future (Stiegler, 2019), with the state-governed visions of futures proposed by SI research. It argues, through Stiegler and SI, that conceptualisations of disruption should include its potential to rationalise technology adoption, especially for citizens in algorithmic societies. The paper expands on current theorisations of disruption and the citizen-state relations discussed within Stiegler’s work.
Hopster, J. (2021). What are socially disruptive technologies? Technology in Society, 67, 101750. https://doi.org/10.1016/j.techsoc.2021.101750
Stiegler, B. (with Ross, D., Jugnon, A., & Nancy, J.-L.). (2019). The Age of Disruption: Technology and Madness in Computational Capitalism (Reprinted). Polity Press.