Accepted Paper
Paper short abstract
Western-trained AI models risk "contextual failure" in the Global South. This paper argues current safety benchmarks ignore SADC's epidemiological reality. We propose a "Minimum Viable Audit" framework to protect digital health sovereignty and prevent algorithmic harm in resource-scarce settings.
Paper long abstract
While the Global North focuses on the existential risks of Artificial General Intelligence (AGI), the Global South faces the immediate risk of "Algorithmic Dumping": the deployment of unverified, Western-centric AI tools into fragile health systems. This paper challenges the assumption that "aligned" models are universally safe.
We introduce the concept of "Contextual Failure," arguing that Large Language Models (LLMs) trained on NHS or US clinical data will hallucinate or provide dangerous advice when faced with the "resource friction" of Southern African healthcare (e.g., suggesting treatments requiring ICUs that do not exist).
Using the Southern African Development Community (SADC) region as a case study, we analyze three critical gaps:
1. Epidemiological Mismatch: The bias of training data toward Western disease presentations.
2. Linguistic Drift: The failure of models to parse local clinical shorthand.
3. Infrastructural Friction: The safety risks introduced by low-connectivity deployment.
We conclude by proposing a "Minimum Viable Audit" (MVA) framework. This policy tool would empower African Ministries of Health to demand specific "Safety Certificates" from AI vendors, shifting the burden of proof from the resource-constrained buyer to the technology provider. This moves the debate from abstract "Digital Rights" to concrete "Digital Safety."
Digital rights, governance, and development futures in the global South