- Convenors:
- Gijs van Maanen (Tilburg University)
- Daan Kolkman (Utrecht University)
- Gert Meyers
- Fran Meissner (University of Twente)
- Linnet Taylor (Tilburg University)
- Format:
- Traditional Open Panel
- Location:
- HG-09A29
- Sessions:
- Friday 19 July
Time zone: Europe/Amsterdam
Short Abstract:
This interdisciplinary panel collects qualitative studies of governmental algorithms that represent humans and nonhuman actors and objects, and reflects on how this empirical work relates to governance- and regulation-oriented work in ethics, law, and policy.
Long Abstract:
Governments use various ‘algorithmic’ tools to represent both the humans (whether individuals or groups) and nonhumans (from trees to waterways) present in their territories. One example is the ‘digital twins’ developed and used by municipalities as input for decisions about what cities are and should become. Digital twins are not the only governmental attempts to map, model, imagine, and represent worlds. Recent discussions on the governance of governmental algorithms, however, seem to focus primarily on algorithmically ‘supported’ processes that result in tangible outcomes, such as the Ofqual scandal in the UK or the COMPAS case in the US. Much attention is devoted to the reflection on and implementation of ethical-legal norms that aim either to prevent such scandals from occurring, or to allow governments to better ascribe blame or praise when such efforts prove to be in vain. Such ex-post evaluations of algorithmic processes resulting in impactful decisions, however, often do little to build a better understanding of how these processes and models actually work. This is a shame, precisely because STS research shows that the ‘regulatory targets’ of such ethical-legal norms are fast-moving targets: what ‘the’ algorithmic system or process deserving regulatory scrutiny actually is remains highly ambiguous.
This interdisciplinary panel aspires to contribute to discussions on algorithmic governance by collecting qualitative studies of governmental algorithms that refer to or represent governmental territory and its inhabitants. We especially invite scholars from various disciplines to reflect on how their empirical work relates, or could relate, to the ethical-legal questions asked by their more governance- and regulation-focused colleagues working in fields like ethics, law, policy, and public administration, or within governmental institutions themselves. How can and should STS position itself vis-à-vis practices of algorithmic governance in and outside of academia?
Accepted papers:
Session 1: Friday 19 July, 2024

Paper short abstract:
How does the CS field vary in its engagement with ethical and social concerns of AI and other algorithmic systems? Using computational methods and millions of publications, this study shows a strong association among social identities, scientific capital, and topic selection within the field of CS.
Paper long abstract:
In the context of the burgeoning influence of AI and socio-technical systems, the public is directing increasing attention towards the ethical and social implications of the wide use of algorithmic systems. What about within the field of Computer Science, where individuals are closely engaged in these technical advancements? How does the CS field vary in its engagement with ethical considerations and social concerns? This study employs computational methods to analyze 5.4 million publications sourced from the Web of Science (WoS) in a comprehensive bibliometric examination. We focus on the complex interplay among social identities, scientific capital, and topic selection within the field of Computer Science, with a specific emphasis on research concerning the societal impact and ethics of algorithmic systems. The findings reveal a noteworthy pattern: women scholars and scholars from racial minority groups show a higher likelihood of contributing to the discourse on the ethical and social considerations of algorithmic systems. In contrast, white male scholars show a tendency to explore the ethics of algorithmic systems only as their academic impact increases. This study underscores the significance of diversity within the scientific community, illustrating how a varied and inclusive scientific workforce contributes positively to the broader scientific ecosystem.
Paper short abstract:
As public managers become increasingly aware of ethical issues in data and AI practices, they use a number of processes and tools to mitigate possible harms and to constitute accountability. Drawing on empirical field research in Dutch government organisations, this paper evaluates such practices.
Paper long abstract:
The notorious child-benefit scandal has made Dutch government organisations more sensitive to ethical issues in AI and data practices. This manifests at the national, regional, and local levels in critical reports, the creation of positions for ethics officers, the establishment of citizen and expert panels on ethics, and the development and use of tools for evaluating AI and data projects. One of these tools is the Fundamental Rights & Algorithms Impact Assessment (FRAIA); another is the evaluation framework for algorithms issued by the national audit office. In addition, the national government has launched a public register for government algorithms, in the hope of creating more transparency. This paper examines how government organisations make use of impact assessments, public registers, and evaluation frameworks, and to what extent these practices facilitate responsible AI and data practices, and accountability. Using FRAIA, the authors of this paper had the opportunity to review a number of government algorithms together with the organisations intending to use, or already using, these algorithms. This provided information about algorithms in use or in procurement, and insight into the practices of mitigating risks, the awareness of ethical and legal issues, and the capacities to carry out such impact assessments. Looking at the algorithm register then gave insights into practices of documentation and the flawed promise of transparency. In conclusion, this paper discusses how STS researchers can study these phenomena up close, and effectively intervene and help to advance good governance for data & AI.
Paper short abstract:
The paper focuses on the use of game engines to render and make visible the predictions of urban digital twins. We draw on critical game studies to trace how rendering the urban twin as a game normalizes models and distances users from their layeredness.
Paper long abstract:
Urban digital twins are increasingly pushed as a set of technologies to support urban planning. The digital twin is meant to allow for both the city’s virtual destruction and its real improvement. With twins, the interfaces used in urban governance are changing. Game engines, suites of tools originally designed and used to render gaming content, are now used to visualize data derived from ensembles of models, each with its own history. Run-ins between embodied realities and serious games have shown that models used in everyday civic applications may be built upon assumptions that have the potential to elicit knee-jerk reactions and hasty (participatory) policymaking. This paper highlights power asymmetries and examines how models and their logics are made visible, and how they get hidden behind hyperreal interfaces. We focus on what the critical study of games can teach us about the core questions raised by new entanglements between data, models, and urban digital planning. In line with Galloway (2006), we argue that game engines are designed to create an illusion of ‘continuity’ rather than highlighting differences in the quality and quantity of data and models. Looking towards public-facing twins, we further consider notions like the digital sublime (Mosco, 2004) and the sense of magic these new forms of rendering entail. The paper thus traces how the twin, rendered as a game, normalizes models and distances users from their layeredness.
Paper short abstract:
This empirical study investigates the social and ethical implications of algorithm-based systems in India, focusing on a recent case in the state of Telangana.
Paper long abstract:
Algorithm-based systems have become increasingly prevalent in various sectors, including social welfare, with the promise of enhancing efficiency and accuracy in decision-making processes. However, the indiscriminate adoption of these systems can lead to unintended consequences, particularly concerning social and ethical implications. This empirical study focuses on a recent case in the Indian state of Telangana, where an algorithm-based system implemented in welfare schemes resulted in the wrongful denial of food to thousands of impoverished individuals.
The study employs a case-study approach to investigate the social and ethical implications of algorithm-based systems in the context of social welfare programs.
The primary objective of this study is to analyze how algorithm-based systems, particularly those utilizing artificial intelligence, contribute to social and ethical problems in the Indian context, using the Telangana case study as a focal point. The study aims to explore the mechanisms through which algorithmic decision-making processes may lead to unintended consequences, such as the wrongful denial of welfare benefits to eligible individuals. Additionally, the research seeks to identify potential strategies for addressing these issues and promoting more responsible and ethical use of algorithm-based systems in social welfare programs.
The study draws upon a variety of data sources, including government documents, reports, and data sets related to the implementation of the algorithm-based system in Telangana's welfare schemes. Additionally, interviews with government officials, welfare recipients, and other stakeholders provide valuable insights into the experiences and perspectives of those affected by the algorithmic decision-making process.
Paper short abstract:
This study investigates the shaping of AI governance through its application in governance frameworks and methods, using an STS lens. We discuss four frameworks and five assessments, and how ethics as governance of AI was framed by the different actors involved.
Paper long abstract:
This study probes the diverse articulations of AI governance, focusing on how legal, ethical, and technological frameworks shape them. Many methods to govern AI exist, but the GDPR, the AI Act, and the Trustworthy AI Ethics guidelines are closing down alternatives for AI governance. Parallel to these legal and ethical frameworks, different assessment lists and methods narrow down what AI governance entails.
Employing the lens of STS, the study views AI governance frameworks as mediation (Verbeek, 2006) through artefacts (Davis, 2020), emphasising their role in shaping user interactions and ethical considerations in AI governance. Each framework or method affords certain interactions, specific users and non-users, and a selection of moral principles to include or exclude. Relevant social groups (Pinch & Bijker, 1984) are creating methods and frameworks that close down what AI governance might mean.
The empirical research consists of projects in which an AI governance need called for a guideline, framework, or assessment. This resulted in the co-creation of four frameworks (one federal, two regional, and one local) and five assessments of specific AI projects. Our analysis revealed that current AI governance frameworks prioritise certain ethical principles while marginalising others, leading to a homogenised understanding of AI governance. Imposing AI governance frameworks often leads project owners to perceive them as tedious, mandatory tasks, detracting from the potential of AI governance to offer valuable insights and improvements for overlooked challenges and opportunities in projects.
Paper short abstract:
Departing from positivist data science in favor of a situated stance, we 1) examine the colonial histories of domination and the broader political economies in which AI is shaped. Based on this, we 2) open a spectrum of alternative visions for AI, conceptualizing it as a material-semiotic web of practices.
Paper long abstract:
This paper presents a humanities perspective on recent developments in generative AI, challenging positivist data science in favor of a situated stance (Haraway 1988).
The first half makes the contingency of power visible, examining the enduring colonial histories of domination that shape current advances in generative AI (Benjamin 2019). It also analyses the broader political economies in which AI is developed by large tech companies (Poell et al. 2019) in order to better understand, critique, and situate the current instantiations of generative AI: from data generation to model development, infrastructuring to standardization, evaluation to deployment, business models to ethico-political consequences.
The second half focuses on alternative visions. Starting with the possible knowledge cultures that AI may amplify, distribute, and generate, a situated approach to AI responds to varying needs and capabilities (Sen 1985). It incorporates local expertise to develop models that reflect context. Situated AI also depends on training data: not data acquired through extraction or augmentation, but data curated by local actors on their own terms, ideally leading to the creation of tailored applications and the multiplication of perspectives on what can be done and made computable, and what should not be done.
Situating AI is crucial to expand our perspectives: from AI as a technological destiny placed upon us by powerful actors, towards AI as a material-semiotic web of practices (Law 2019). This engenders a multiplication of imaginations for how to live together as a pluriversal collective, characterized both by interdependence and mutual responsibility and by respect for profound otherness (Escobar 2019).
Paper short abstract:
The concept of illegalism, non-legal behavior as both a tactic and a strategy of projecting power, currently has little conceptual footing. In this paper, I argue that illegalism has new utility as an analytic concept in twenty-first century algorithmic governance.
Paper long abstract:
In his lectures on the development of the “punitive society,” Michel Foucault describes the eighteenth century as a period of “systematic illegalism,” including both lower-class or popular illegalism and “the illegalism of the privileged, who evade the law through status, tolerance, and exception” (Foucault 2015, 142). In this paper, I argue that illegalism has new utility as an analytic concept in the twenty-first century. Illegalism is characteristic of both the business models and the rhetorical positioning of many contemporary digital media firms. Indeed, such “platform illegalism” is so rife that commentators often seem to accept it as a necessary aspect of Silicon Valley innovation.
In this presentation, I describe illegalism as theorized by Foucault and others and develop a theory of platform illegalism grounded in the evolution of technical and business models under platform capitalism. This presentation is part of a larger project in which I document the prevalence of illegalism on the part of digital platforms in various arenas, focusing in particular on platform labor and generative AI; examine the range of responses to such illegalism from consumers, activists, and governments; and formulate recommendations regarding ways to account for platform illegalism in scholarly and activist responses, as part of governance mechanisms for digitally mediated societies.
Foucault, Michel. 2015. The Punitive Society: Lectures at the Collège de France 1972-1973. Edited by Bernard E. Harcourt. Translated by Graham Burchell. New York: Picador.