- Convenors:
- Nikolaus Poechhacker (University of Klagenfurt)
- Roger von Laufenberg (VICESSE Research GmbH - Vienna Centre for Societal Security)
- Format:
- Traditional Open Panel
- Location:
- HG-15A16
- Sessions:
- Thursday 18 July, -, -
Time zone: Europe/Amsterdam
Short Abstract:
We aim to reflect on how the democratic state is being transformed by the integration of AI into its institutions. Specifically, we are interested in discussing this at the level of (institutionalized) practices, including theoretical reflections, empirical cases, and practical or theoretical interventions.
Long Abstract:
Democracy may not survive the ongoing digital transformation. A ubiquitous surveillance regime, combined with new forms of digital control, will erode the necessary protection of citizens from the state. Further, algorithmic recommender systems are splintering the public sphere and making an informed, transparent, and open discourse impossible. At least, that is a prominent narrative about the complex relationship between democracy and AI, which, however, runs into conceptual issues: What does this mean at the level of practices? How are central institutions of democracy integrating, reacting to, and enacting AI as a tool for and in democracy? Furthermore, democracy as a concept is, and always was, hard to pin down. In this panel, we aim to shift the perspective on the relation between AI and democracy and reflect on how the democratic state is being transformed by the ongoing integration of AI into its central institutions.
We thereby understand the democratic state as being enacted in everyday practices that connect a heterogeneous set of actors. These relations and their interdependencies are called into question when AI and machine learning techniques are introduced. This includes all areas of the democratic state, such as policing (Egbert & Leese), public administration and policy making (Winthereik), welfare regimes (Allhutter et al.), or the legal system (Hildebrandt), and addresses questions of how democracy is practically done (Birbak & Papazu), how machine learning is adapted to democratic values (Poechhacker), or how the practically enacted rationalities of these institutions are changing. We want to reflect on the question of how the transformation of the democratic state is being made and done with and through AI by multiple actors.
We welcome all contributions that reflect on the relation between technological and institutional transformation within the digital democratic state, including theoretical reflections, empirical cases, and practical or theoretical interventions (including moments of resistance or theoretical subversion).
Accepted papers:
Session 1: Thursday 18 July, 2024

Paper short abstract:
We offer an empirically grounded account of the current functions of AI discourse and the promise of AI for political systems and interpret these findings in the light of the crises of contemporary capitalism. We underpin our thesis with empirical data on public opinion on AI in Germany.
Paper long abstract:
It is a commonplace theme to lament the potential long-term dangers of Artificial Intelligence (AI) for democratic states. Less often is the potential of AI to deepen democracy emphasized. Rarer still are analyses of the actual, contemporary effects of AI technology and AI discourse on democratic systems. Unlike speculations about future courses of development, the latter question is open to empirical social-scientific research. In our contribution, we offer an empirically grounded account of the current functions of AI discourse and the promise of AI for political systems and interpret these findings in the light of the crises of contemporary capitalism. We underpin our thesis with empirical data on public opinion on AI in Germany.
Against this backdrop, we interpret content-analytic findings from two distinct data sources. First, a set of articles (N = 2,073) was sampled from 12 national print, online, and broadcast media that address digitization and AI in pre- and post-election coverage of the 2021 German federal election. A quantitative content analysis of these provides insights into the political role assigned to the topics of digitization and AI, with particular focus on risk-benefit and problem-solution assessments, articulated needs for political action, and arguments justifying their use. Second, coverage of AI has been collected continuously since January 2021 from 34 German online and print media.
We thereby hope to show the ideological and legitimatory function of AI for contemporary (German) democracy.
Paper short abstract:
This contribution investigates how and why technocratic rather than democratic discourses dominate public discussions about AI governance. To explain why even deeply political questions of AI governance tend to be portrayed as technocratic ones, it draws attention to the unequal distribution of power.
Paper long abstract:
While the development and use of Artificial Intelligence (AI) in governance raise profound democratic questions about participation, justice, and power, dominant discourses tend to portray AI as a purely technocratic matter. Even deeply political issues in public discussions about AI tend to be presented as technocratic ones. Instead of opening up democratic questions of AI governance, policy makers repeat the mantra that ‘we have to get AI governance right to increase the benefits and mitigate risks.’ Democratic questions about what counts as a benefit, and for whom, are often sidelined and neglected. Similarly, a popular discourse of using AI to solve the major societal challenges of our times, in areas such as environment and health, tends to present AI as a quick technological fix to complex and uncertain ‘wicked problems’ rather than focusing on wide-ranging participation of diverse actors in tackling socio-technical problems in a collaborative and inclusive manner. Against this background, this contribution investigates how and why technocratic rather than democratic discourses dominate public discussions about AI governance. It demonstrates how this discourse privileges narrow technical expertise over a broader focus on social and political perspectives. To explain the dominance of technocratic discourse, this contribution draws attention to the unequal distribution of power in the field of AI, where not only economic and technical but also political and discursive power is highly concentrated in a small number of big tech companies. Empirically, it will draw on documents on AI governance issued by governments, international organizations, consultancies, and civil society organizations.
Paper short abstract:
Utilising the STS lens of "constitutionalist coproduction", this paper examines tacit yet profound transformations, in both the cognitive and the political realm, accompanying the production of knowledge about and the regulation of AI.
Paper long abstract:
AI is deeply entangled with politics and geopolitics, as evident in the Cambridge Analytica scandal, the proliferating discourse of an "AI race", and the sociotechnical imaginaries which link AI progress with national achievement across cultures. However, both AI's ontological status and AI systems' impacts are currently under intense contestation. These uncertainties around AI have become a particularly thorny issue when regulators attempt to tame the emerging knowledge and products with instruments and practices such as ethical guidelines, risk-based regulatory frameworks, standards, and auditing processes. Controversies unfold in China, the EU, and the US in different ways.
By analyzing policy documents, interviews with experts, and public controversies in China, the EU, and the US through a co-productionist lens, this paper traces tacit yet profound transformations, in both the cognitive and the political realm, accompanying the production of knowledge about and the regulation of AI. It examines these constitutive processes by asking three questions: 1. How do the fundamental political organizations as well as the institutional and procedural arrangements of science and technology policy making in the different political entities shape the epistemic approaches to understanding AI, its risks, and solutions to its problems? 2. How are the identification and categorization of AI and associated risks created to maintain certain socio-political orders in China, the EU, and the US? 3. During a time of heightened geopolitical rifts and ideological conflicts, how can the co-productionist interpretive approach contribute to ongoing efforts to form a global regime of AI governance?
Paper short abstract:
The contribution discusses the outcomes of an empirical study that combines survey and AI audit methods to investigate how Swiss citizens use AI-driven search engines to find information about popular votes and whether specific groups of citizens are more likely to be exposed to AI bias.
Paper long abstract:
AI-driven platforms, such as search engines, play an increasingly important role in how citizens find and consume politics-related information in liberal democracies. The functionality of these platforms is affected by various system factors, including the degree to which they use contextual information or randomise their outputs. However, despite the importance of system factors, it is also integral to account for the agency of users who interact with AI, especially as these interactions have major implications for the quality of information provided by AI. To better understand how user agency can affect the role of AI in the context of democratic decision-making, we empirically investigate how Swiss citizens use AI-driven web search engines, Google and Bing, to find information about federal-level popular votes in the spring of 2024. Using a large-scale survey, we first investigate how the choice of search queries used by a representative sample of Swiss citizens to find information about votes is shaped by their political views, education, and attitudes toward a voted issue. Then, we use the survey data to conduct a virtual agent-based AI audit and simulate the search behaviour of Swiss citizens to systematically investigate whether certain groups of citizens are more likely to be exposed to AI bias (e.g. in terms of the selection of information about votes being disproportionately influenced by the opinions of specific political parties). The findings of the study will contribute to a better understanding of the interaction between system- and user-side factors of AI systems and their implications for democratic decision-making.
Paper short abstract:
Drawing on observations from an ongoing project, and comments from members of this consortium, this presentation reflects on issues of democratic accountability and the complexities of integrating different work logics and practices into the development of predictive policing tools.
Paper long abstract:
This presentation contributes to the long tradition of predictive policing studies by focusing on the early stages of developing AI tools, and in particular on computer science engineers and programmers. It introduces an ongoing R&D project and explores the complexities of integrating different work logics and practices into the development of predictive policing tools. Not only does this project involve the development of algorithms at different levels of maturity, but it also brings together engineers and developers, members of different police departments, legal practitioners, and social sciences and ethics scholars with their associated practices and worldviews. Moreover, as a publicly funded development, it carries with it the additional expectation of societal contribution, and transparency and explainability of its results.
References to predictive policing are abundant, as the vast array of literature shows (Egbert & Leese, 2021; Kaufmann, Egbert & Leese, 2019), but there is less emphasis on the socio-technical and procedural aspects of the development of these systems (Fest et al., 2023). Although studies such as those by Lally (2022) or Widder & Nafus (2023) emphasise how the abstract knowledge practices typical of the software industry contribute to the compartmentalised and isolated developments that AI systems entail, how accountability is, or can be, enacted in the development of AI remains a missing element. Drawing on observations from an ongoing project, and comments from members of this consortium, this paper reflects on how the need for democratic accountability can be embedded in the project and its members’ actions and work.