- Convenors:
- David Leslie (The Alan Turing Institute)
- James Wright (The Alan Turing Institute)
- Format:
- Panel
- Sessions:
- Tuesday 7 June
Time zone: Europe/London
Short Abstract:
This panel will look at how data justice movements and perspectives, particularly those that look beyond Western Europe and North America, are reshaping global debates on AI ethics and transforming its future from the ground up.
Long Abstract:
In recent years, narratives about an escalating “AI arms race” have become all too commonplace, portending a gloomy human future predetermined by geopolitical sprints to a non-existent technoscientific finish line. At the same time, a parallel but more muted geopolitical race has intensified: a dash to develop international standards for AI ethics and governance. In an ideal world, the development of globally inclusive governance protocols and ethical standards would ensure that AI technologies are developed and implemented in “responsible” ways. Over the past decade, however, international policymaking and standards-setting ecosystems have largely been characterised by the “policy hegemony” of Western tech corporations and Global North geopolitical actors, who have asymmetrically wielded network power while simultaneously engaging in virtually unimpeded data capture and rent-seeking behaviour.
More recently, a growing body of data justice scholarship has confronted these power dynamics and reframed the ethical challenges of datafication through the lens of social justice. This has spurred a growing awareness within AI ethics of the sociotechnical dimensions of power and a greater focus on relations of inequity and extraction within and between societies. On the whole, this “data justice turn” may well signal the coalescence of an increasingly detranscendentalised AI ethics with closely aligned fields such as data feminism, design justice, data colonialism, and non-Western data ethics, among others.
This panel will consider how data justice movements and perspectives, particularly those that look beyond Western Europe and North America, are reshaping global debates on AI ethics and transforming its future from the ground up.
Accepted papers:
Session 1: Tuesday 7 June, 2022
Paper short abstract:
The historically powerful wield authority in data innovation ecosystems under the assumption that they are best positioned to solve the world's many problems. We argue for a "pluriversal data justice" that advances diverse forms of knowledge and experience in pursuit of collective well-being.
Paper long abstract:
The collection and use of data, and the presence of digital infrastructures, are undeniable features of contemporary life. And yet the maldistribution of their benefits and harms increasingly concentrates wealth and power in a handful of nations and multinational corporations that extract value from, and act with impunity upon, the rest. Corresponding to this is a techno-triumphalist conceit: the historically powerful wield authority in data innovation ecosystems largely under the assumption that they are best positioned to solve the world's many problems, including those for which they bear responsibility. These include the destabilising and necrotic forces of environmental degradation, "major power" militarisation, and expanding socio-economic inequity. Where data-driven technologies reproduce these patterns, we argue for the urgency of a "pluriversal data justice" that advances diverse forms of knowledge and experience in pursuit of collective well-being.
A pluriversal data justice demands the integration of absent epistemologies into dominant modes of theory and practice by accounting for the 'sociology of absence': the systematic exclusion of voices whose experiences of coloniality and oppression are at odds with Global North claims of epistemic universalism and technological achievement. The collection and operationalisation of data strongly feature colonialist and extractive logics and the erasure or exoticisation of local experience. Pluriversal data justice reflexively reframes data practices and the experience of digital infrastructures as broadly inclusive and interculturally emancipatory, pointing the way towards an integrative epistemology suited to the work of ensuring the survival of humanity and achieving its collective liberation.
Paper short abstract:
Much tech-related human rights discourse concerns particular technologies and their use. However, the rights issues underlying digital infrastructures are less considered. This paper explores the tensions between human rights and the global, yet regionally bound, nature of digital infrastructures.
Paper long abstract:
There is a growing body of human rights literature regarding the design and use of AI. Human rights are universal, meaning they apply equally to everyone, and indivisible, meaning that all human rights have equal status. They are also interdependent and interrelated, meaning the improvement of one right can facilitate the advancement of others, and vice versa. In practice, human rights are set out in international and domestic laws. Governments ratify international treaties and develop domestic laws to make human rights a reality for their local communities.
The processing of data ultimately drives AI, which means that data infrastructure underpins data-driven applications and services such as AI. Digital infrastructures can adversely impact and infringe on human rights, and such violations may fall unevenly across different social, economic, and political lines, between individuals and groups. Importantly, data infrastructures operate across jurisdictions, yet each legal system deals with data and such infrastructures according to local norms. That is, the global nature of digital infrastructures makes imposing particular governance and other norms challenging. Moreover, tensions may arise between the advancement of different rights, particularly when taking a human rights approach to governing digital infrastructures.
This paper therefore elaborates the tensions between the universal and global nature of human rights and digital infrastructures that operate globally yet have particular regional, cultural, political, and jurisdictional groundings.
Paper short abstract:
Algorithms can now automatically generate data-driven narratives, but these so-called Natural Language Generation (NLG) tools are not neutral. Based on autoethnography, this paper will discuss socio-technical issues linked with GPT-3 when the tool is used to generate politically sensitive narratives.
Paper long abstract:
Algorithms can now automatically generate data-driven narratives, but these so-called Natural Language Generation (NLG) tools are not neutral. Based on autoethnography, this paper will discuss socio-technical issues linked with GPT-3 when the tool is used to generate politically sensitive narratives. Given any text prompt, such as a phrase or a sentence, GPT-3 returns a text completion in natural language. Recently, this tool (and its earlier version, GPT-2) has been used to write news articles and fiction. However, we know very little about the tool's capacity (what types of text it will return) and why. This paper uses autoethnography to investigate what biases, prejudices, and norms are embedded in GPT-3, and how, by showing what the tool deems 'politically sensitive' or 'harmful' content. This method helps render black-boxed AI algorithms transparent and sheds light on how using NLG AI to write (aka 'auto-writing' or 'robo-writing') may shape the types of narratives generated.
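For readers unfamiliar with the prompt-completion interaction the abstract describes, the minimal sketch below shows roughly how a text prompt is sent to GPT-3 and a free-text completion is returned, using OpenAI's legacy Completion API as it existed around 2022. The engine name, prompt, and sampling parameters are illustrative assumptions, not details of the authors' actual study setup.

```python
# Minimal sketch of the prompt-completion interaction described above,
# using the OpenAI Python client's legacy Completion API (circa 2022).
# The engine name, prompt, and parameters are illustrative assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="text-davinci-002",  # illustrative GPT-3 engine choice
    prompt="The political situation in the region is",
    max_tokens=100,
    temperature=0.7,
)

# GPT-3 returns a natural-language continuation of the prompt; the study
# asks what kinds of completions come back and which prompts the tool
# treats as 'politically sensitive' or 'harmful'.
print(response.choices[0].text)
```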