- Convenors:
- Yousif Hassan (University of Michigan - Ann Arbor), Kwame Edwin Otu (Georgetown University), Kebene Wodajo (ETH Zurich), Jia Hui Lee (University of Bayreuth), Laila Hussein Moustafa (University of Illinois)
- Chairs:
- Yousif Hassan (University of Michigan - Ann Arbor), Jia Hui Lee (University of Bayreuth)
- Discussants:
- Kebene Wodajo (ETH Zurich), Laila Hussein Moustafa (University of Illinois), Kwame Edwin Otu (Georgetown University)
- Format:
- Traditional Open Panel
- Location:
- HG-11A24
- Sessions:
- Tuesday 16 July, -, -
Time zone: Europe/Amsterdam
Short Abstract:
This panel explores the questions of what analytical frameworks for the social study of AI and data might look like when they are shaped by knowledges and experiences from Africa and how understandings of concepts such as intelligence, learning, and computing are contested in an African context.
Long Abstract:
This panel grapples with two interrelated questions: What might analytical frameworks for the social study of Artificial Intelligence (AI), machine learning, and data science look like when they are shaped by knowledges and experiences from Africa? How are understandings of key concepts such as intelligence, learning, data, digitality, virtuality, artificiality, and computing contested in an African context? Several scholars in African studies, anthropology, information science, and other allied fields have argued for analyses of technology that do not take for granted systems of knowledge based in Europe and the West (Archambault 2017; Hassan 2022; Newell and Pype 2021; Nyabola 2018). Africans and other communities in the Majority World have been grappling with the social and political effects of AI and data science. Kenyan employees have reported experiencing post-traumatic stress disorder as a result of labeling datasets and moderating AI content for OpenAI and Facebook. Mobile phone companies continue to collect and use the personal information of their African users while expanding demand for its uses. Several African cities have embarked on projects of ‘smartness,’ including rolling out digital platforms for government services, monitoring environmental change with sensors, and supporting innovation hubs such as Yabacon Valley and Konza Technopolis. As some quarters embrace digitization, others have warned of a new algorithmic (Birhane 2020) or digital colonization. We invite papers that examine African experiences of AI and data and that intervene in discussions about labor, ethics, environmental impact, governance, practices of data collection, and innovation. We especially, though not exclusively, encourage presentations that articulate how African experiences of AI and data might contribute to analytical frameworks for a critical study of AI and data.
Accepted papers:
Session 1: Tuesday 16 July, 2024, -
Paper long abstract:
The innovation-turn in the humanitarian enterprise, fueled by a big data gold rush, has seen the desire to impose ‘safety’ and ‘order’ become the rationale for exploitative experimentation, creating a dichotomy between risky bodies (subject to risky, innovative technologies in contexts of heightened uncertainty) and safe bodies (subject to tested technologies in safe environments) (Jacobsen 2015).
The push for, and resistance against, forms of techno-solutionism during the Ebola epidemic offers a compelling case for exploring the relationship between AI, big data technologies, and shifting notions of risk. Implicit in the opposition between humanitarian enterprises' techno-solutionism and the politics of refusal adopted by African actors are differing notions of risk tied to AI, big data, and biopolitics – risky African bodies in need of technological interventions versus risky technologies and big data that disenfranchise. Drawing on the frameworks of data colonialism (Couldry and Mejias 2015), techno-colonialism (Madianou 2018), and Foucauldian readings of risk (Foucault 1978), the objective of this paper is to understand this dichotomy. How do emergency AI and big data technologies transform or (re)frame discourses of risk when transposed? How are they resisted?
Taking Call Detail Records as a case study, the paper reflects on them as a socio-technical big data technology that brings together data-oriented global networks, market logistics, international humanitarian infrastructures, and state actors to produce new power arrangements and dynamics (Madianou 2019). It considers their function as emergency media (Ellcessor 2022) that reshape how societies understand, manage, and experience emergencies (Meier 2015; Duffield 2019).
Paper short abstract:
The article maps a discrepancy between the structural nature of risks that emanate from current digital ID programmes, with a focus on Africa, and the more individualistic conceptual foundation of the right to privacy. Building on scholarship on group privacy and the social value of privacy, it will show the inadequacy of this individualistic orientation.
Paper long abstract:
The article maps a contradiction between the structural nature of risks that emanate from current digital ID programmes (with a focus on Africa) and the more individualistic conceptual foundation of the right to privacy. Building on existing scholarship on group privacy and the social value of privacy, the article will show the inadequacy of the individualistic right to privacy in the context of big-data-driven public and private decision-making. It will then introduce an alternative perspective that builds on the current privacy debate through Africa’s people-centric view of rights.
Paper short abstract:
The paper offers an empirically grounded and theoretically informed account of how AI-powered urban surveillance in Africa builds on long-term histories of control and segregation, rather than being simply influenced by the latest advances of AI superpowers.
Paper long abstract:
Smart cities incorporate some of the most critical tensions introduced by advances in AI, in relation to their social and political consequences. They can inform and improve the provision of services – e.g., through better traffic and transport management – but they can also be used as instruments of surveillance and control. This duality has been a source of increasing concern, not only in authoritarian states, accused of exploiting smart cities to expand their gaze in return for better services (Hoffman, 2022), but also in more open societies.
This study compares two AI-powered law enforcement projects deployed in Johannesburg and Cape Town, South Africa. The first, the Safe City project developed by China’s Huawei in Johannesburg, links cameras to form wide-area networks that use artificial intelligence (AI) to index, sort, and interpret data pooled into centralised surveillance-based “nerve centres”. The second, the ShotSpotter project implemented by US company SoundThinking in Cape Town, uses smart sensors and AI to help law enforcement agencies collect evidence on gun-shooting incidents and detect patterns of gun violence. The structured comparison of the two projects makes it possible to move beyond simplistic dichotomies between supposedly democratic and authoritarian uses of AI in urban environments. It illustrates instead the complex interactions between the socio-technical imaginaries of the countries in which specific technological solutions originate (China and the US) and the longer histories of attempts by South Africa’s city administrations to seize digital technologies to fight crime and improve service delivery.
Paper short abstract:
The paper critically explores the deployment of Artificial Intelligence (AI) in higher education across Africa.
Paper long abstract:
This paper examines the deployment of artificial intelligence (AI) in higher education across Africa, emphasizing the potential benefits and the critical challenges of inadequate data privacy protections. It argues that the transformative promise of AI as a tool for modernization and educational advancement is significantly compromised by the practices of private corporations that fail to safeguard data privacy, leading to a new form of digital colonialism through data mining.
The study highlights instances in Kenya where media reports have raised concerns over privacy violations due to unchecked AI applications. These incidents underscore the urgent need for robust data privacy frameworks to protect individuals’ rights and preserve their autonomy against digital exploitation. By juxtaposing Kenya’s and Tunisia’s roles as emerging African AI hubs, the paper calls for a decolonization approach to using AI in higher education. This approach advocates establishing comprehensive data privacy policies alongside empowering legal and educational measures to mitigate the risks associated with AI technologies.
The objective is to ensure that AI empowers African societies, facilitating educational excellence while firmly upholding the principles of data privacy and sovereignty.
Paper short abstract:
Queer African digital art offers interventions into ongoing conversations about the political and colonial dangers of AI.
Paper long abstract:
Artists in Africa have increasingly used augmented and virtual reality and other digital rendering tools to produce art that creatively and safely expresses queer identities. Digital spaces and tools have enabled a kind of “queer agency” that protects queer creators from censorship, stigma, discrimination, and/or violence in the real world (Mwangi 2014). This talk surveys several queer digital artworks and the kinds of challenges they pose to the structures of digitality, including binary code. In producing art that moves beyond the gender binary, African queer artists practice a (de)coding that enables safer and more inclusive digital relationships. Such safer and more inclusive digital practices are drawn from existing queer experiences that require discretion, anonymity, and self-preservation as part of survival (Sloan 2023). This paper also considers how queer interventions by African artists can contribute to discussions about the increasing "irrelevance of the human" (Nhemachena 2019) as AI renders people inconsequential through automation, or reducible to data points for extraction (Zuboff 2019). Queer African digital art thus offers interventions into ongoing conversations about the political and colonial dangers of AI (Birhane 2020; Kwet 2019).
Paper short abstract:
This paper examines how AI policy and AI platforms focused on smart farming in the Western Cape simultaneously pixelate plant beings and convert gendered bodies into binary forms of expression, thus limiting AI futures in South Africa.
Paper long abstract:
Drawing upon feminist STS, critical plant studies, and STS in/of Africa, this paper examines how AI policy and platforms focused on smart farming in the Western Cape simultaneously pixelate plant beings and convert gendered bodies into binary forms of expression, which are more easily “readable” by the logics of computers and the law but, in turn, limit futures of care for plant environments and peoples of all racialized genders. Drawing on ethnographic research, the paper shows that agri-tech companies in the Western Cape are developing AI technologies to help farmers maximize profits through crop monitoring, which ideally reduces reliance on fertilizers and water but, in doing so, converts apples into pixelated dark blue squares of overperforming versus light blue squares of underperforming fruit, thus reinforcing understandings of plants as mere resources for extraction. At the same time, the 2020 South African Report of the Presidential Commission on the Fourth Industrial Revolution seeks to challenge global flows of knowledge production from the West to the rest by positioning South Africa as a site of innovation for AI-based technologies. It does so, however, through an emphasis on social wellbeing as “economic competitiveness” and an attention to gender that reduces gendered relations to a computational binary logic of sex ratios and male/female, which limits transformations for AI across the continent that enable social justice. In conclusion, this paper contends that a robust vision of the governance of South African AI futures must exceed the economic and attend to relations of intersectional gender and multi-species justice.
Paper short abstract:
While the transformation of matter through fire doesn’t go unnoticed, radiation postpones patients’ fear of the impact of Gamma rays. The unknowability of radiation, its silence(s), and its visual invisibility subvert responsibility and accountability in a radiotherapy unit in Rabat, Morocco.
Paper long abstract:
Unlike fire, radiation is invisible. It penetrates the skin, burns and shrinks tumors. Radiation, dosimetry, and artificially intelligent algorithms for radiotherapy treatment as “materialities” or processes are not easy to grasp for all cancer patients. Besides the technician’s voice announcing the beginning and the end of the treatment through a microphone, cancer patients are faced with silence during their daily radiotherapy sessions. Based on ethnographic fieldwork in a ‘smart’ oncology private clinic in Rabat, Morocco, this paper centers the narratives of two protagonists, Hamid and Nadia, to illustrate the ways in which waiting “inside” and “outside” the “machine” emerges as an active correspondence between religious prayers and the commitment to attributing healing to Divinity and God. The anthropology of the unknowable centers silence as one of its paramount elements. While silence complicates interpretation (Weller, 2017), perception (or lack thereof) calls for mediative tools. What happens in moments in which human sensoriality becomes limited?
The paper centers the argument that while the transformation of matter through fire doesn’t go unnoticed, radiation postpones patients’ fear of the impact of Gamma rays. The unknowability of radiation, its silence(s), and its visual invisibility subvert responsibility and accountability inside the radiotherapy unit. By engaging and considering human-machine interaction, the paper explores a landscape of techno-solutionism that increases alienation for cancer patients, both in understanding and in feeling cared for. In doing so, it brings further into view the visible and invisible frontiers of technology and the radiotherapy planning algorithms that mediate cancer patients’ care trajectories.
Paper short abstract:
This proposal reports on how Senegal is developing national regulations and strategies for the development and use of digital technology, focusing on how Senegalese youth perceive the tensions concerning Artificial Intelligence, collective deliberation, and decision-making processes.
Paper long abstract:
Artificial Intelligence (AI), Responsible Research Innovation (RRI), and Science and Technology Studies (STS) are becoming focal points for fostering social technology frameworks in the Global South. As the evolution of AI garners global attention, a noticeable imbalance persists between hemispheres. Northern corporations often treat the Global South as an outsourcing option for practices that may not meet the ethical standards of Europe and North America. Regions such as South America, Southeast Asia, and Africa bear the brunt of these operations, impacting citizens who find themselves constrained by practices that exploit local companies and social assemblages. In response to these challenges, countries like Senegal, Kenya, and South Africa are asserting themselves by developing regulations and standards to establish optimal parameters for digital work and development. Focusing on Senegal, this proposal delves into its Digital Strategy, scrutinizing the tensions surrounding AI in the field of evidence-based decision-making and elaborating on how Senegalese youth perceive the relationship between digital development, collective deliberation, and decision-making. Leveraging the framework for decolonizing transformation in non-Western and Southern innovation and technology (TnWiST), the proposal seeks to report on how Senegal can emerge as a global reference in decolonizing digital systems for sustainable development. Additionally, this analysis will present insights gathered from the survey "Cartographie des écosystèmes d'innovation au Sénégal" to understand the imaginaries of Senegalese youth concerning AI. Through this proposal, the goal is to establish common ground for African countries to engage in discussions about further possibilities for technology decolonization.
Paper short abstract:
I discuss the sociotechnical visions of African scientists and practitioners working on NLP models for local Indigenous languages. I examine their discourses and practices and highlight how histories of epistemological decolonization are informing some local AI development approaches on the continent.
Paper long abstract:
With the global push for technological diffusion of AI, machine learning, and big data to the Global South/Majority World, Africa finds herself once again pushed into contested discourses and fragmented practices of playing catch-up with the West. These narratives subscribe to dominant conceptions of progress and development and to assumptions of human and technological lack that have long described the continent. However, a growing number of African scholars and scientists are asking different questions that challenge old orthodoxies of technological innovation and economic and social development. They are attempting to rethink AI technology in the local context while reimagining the ways in which technology can contribute to the prosperity of their local communities.
In this presentation, I explore their sociotechnical visions of AI development, drawing on multi-sited ethnographic work across multiple African countries. I follow African scientists and practitioners working on natural language processing models for African Indigenous languages and examine their discourses and practices through the analytical lens of decoloniality in Africa.
Historically, decolonization meant the complete overthrow of colonial structures, institutions, and ideas of Western modernity in post-colonial Africa. I highlight how these histories are informing local AI development efforts and inspiring different AI approaches with epistemological underpinnings based on African communal practices and Indigenous ways of knowing and being in the world. I argue for the need to look at histories of decolonization on the continent to restore African knowledges and cultural heritage in the current moment of technoscientific capitalism and global AI development.