- Convenors:
- Nikolaus Poechhacker (University of Klagenfurt)
- Roger von Laufenberg (VICESSE Research GmbH - Vienna Centre for Societal Security)
- Format:
- Traditional Open Panel
Short Abstract:
We aim to reflect on how the democratic state is being transformed by the integration of AI into its institutions. Specifically, we are interested in discussing this at the level of (institutionalized) practices, including theoretical reflections, empirical cases, and practical or theoretical interventions.
Long Abstract:
Democracy may not survive the ongoing digital transformation. A ubiquitous surveillance regime combined with new forms of digital control will erode the necessary protection of citizens from the state. Further, algorithmic recommender systems are splintering the public sphere and making an informed, transparent, and open discourse impossible. At least that is a prominent narrative on the complex relationship of democracy and AI, which, however, runs into conceptual issues: What does this mean at the level of practices? How are central institutions of democracy integrating, reacting to, and enacting AI as a tool for and in democracy? Furthermore, democracy as a concept is and always was hard to pin down. In this panel, we aim to shift the perspective on the relation between AI and democracy, and reflect on how the democratic state is being transformed by the ongoing integration of AI into its central institutions.
We thereby understand the democratic state as being enacted in everyday practices that connect a heterogeneous set of actors. These relations and their interdependencies come into question when AI and machine learning techniques are introduced. This includes all areas of the democratic state, like policing (Egbert & Leese), public administration and policy making (Winthereik), welfare regimes (Allhutter et al.), or the legal system (Hildebrandt), and raises questions of how democracy is practically done (Birbak & Papazu), how machine learning is adapted to democratic values (Poechhacker), or how the practically enacted rationalities of these institutions are changing. We want to reflect on how the transformation of the democratic state is made and done with and through AI by multiple actors.
We welcome all contributions that reflect on the relation between technological and institutional transformation within the digital democratic state, including theoretical reflections, empirical cases, and practical or theoretical interventions (including moments of resistance or theoretical subversion).
Accepted papers:
Session 1
Fabian Anicker (Heinrich-Heine-Universität), Frank Marcinkowski (University of Duesseldorf), Golo Flasshoff (Heinrich Heine University Düsseldorf)
Long abstract:
It is a commonplace theme to lament the potential long-term dangers of Artificial Intelligence (AI) for democratic states. Less often is the potential of AI to deepen democracy emphasized. But even more rarely are the actual, contemporary effects of AI technology and AI discourse on democratic systems analyzed. Unlike speculations about future courses of development, we can do empirical social scientific research on the latter question. In our contribution we offer an empirically grounded account of the current functions of AI discourse and the promise of AI for political systems, and interpret these findings in the light of the crises of contemporary capitalism. We underpin our thesis with empirical data on public opinion on AI in Germany.
Against this backdrop, we interpret content-analytic findings from two distinct data sources. First, a set of articles (N = 2,073) was sampled from 12 national print, online and broadcast media that address digitization and AI in pre- and post-election coverage of the 2021 German federal election. A quantitative content analysis of these articles provides insights into the political role assigned to the topics of digitization and AI, with particular focus on risk-benefit as well as problem-solution assessments, articulated needs for political action, and arguments justifying their use. Second, coverage of AI has been continuously sampled since January 2021 from 34 German online and print media.
We thereby hope to show the ideological and legitimatory function of AI for contemporary (German) democracy.
Inga Ulnicane (University of Birmingham)
Long abstract:
While the development and use of Artificial Intelligence (AI) in governance raise profound democratic questions about participation, justice, and power, dominant discourses tend to portray AI as a purely technocratic matter. Even deeply political issues in public discussions about AI tend to be presented as technocratic ones. Instead of opening up democratic questions of AI governance, policy makers repeat the mantra that ‘we have to get AI governance right to increase the benefits and mitigate risks.’ Democratic questions about what the benefits are and for whom are often sidelined and neglected. Similarly, a popular discourse of using AI to solve the major societal challenges of our times in areas such as environment and health tends to present AI as a quick technological fix to complex and uncertain ‘wicked problems’, rather than focusing on wide-ranging participation of diverse actors in tackling socio-technical problems in a collaborative and inclusive manner. Against this background, this contribution investigates how and why technocratic rather than democratic discourses dominate public discussions about AI governance. It demonstrates how this discourse privileges narrow technical expertise over a broader focus on social and political perspectives. To explain the dominance of technocratic discourse, this contribution draws attention to unequal power distribution in the field of AI, where not only economic and technical but also political and discursive power is highly concentrated in a small number of big tech companies. Empirically, it will draw on documents on AI governance issued by governments, international organizations, consultancies, and civil society organizations.
Yishu Mao (Max Planck Institute for the History of Science)
Long abstract:
AI is deeply entangled with politics and geopolitics, as evident in the Cambridge Analytica scandal, the proliferating discourse of an "AI race", as well as the sociotechnical imaginaries which link AI progress with national achievement across cultures. However, both AI's ontological status and AI systems' impacts are currently under intense contestation. These uncertainties around AI have become a particularly thorny issue as regulators attempt to tame the emerging knowledge and products with instruments and practices such as ethical guidelines, risk-based regulatory frameworks, standards, and auditing processes. Controversies unfold in China, the EU and the US in different ways.
By analyzing policy documents, interviews with experts, and public controversies in China, the EU and the US through a co-productionist lens, this paper traces tacit yet profound transformations in both the cognitive and the political realm that accompany the production of knowledge about, and regulation of, AI. It examines these constitutive processes by asking three questions: 1. How do the fundamental political organizations as well as the institutional and procedural arrangements of science and technology policy making in the different political entities shape the epistemic approaches to understanding AI, its risks, and solutions to problems? 2. How are the identification and categorization of AI and associated risks created to maintain certain socio-political orders in China, the EU and the US? 3. During a time of heightened geopolitical rifts and ideological conflicts, how can the co-productionist interpretive approach contribute to the ongoing efforts to form a global regime of AI governance?
Johan Buchholz (City of Munich)
Long abstract:
Public administrations cannot ignore the current debates on AI, including its opportunities and challenges. An open but important question is how implementing AI technologies can support, and not undermine, trust in public administration, which is an important institution in our current democratic landscape. The introduction of AI in this context potentially shifts role relationships, organisational structures and processes, and is accompanied by unintended consequences. Therefore, various efforts to ensure that AI technologies do what is expected of them are part of the presented empirical case of a large public administration in Germany.
In my contribution, I analyze and discuss different attempts to build trust in the context of AI technologies. One important but rather abstract document is a 'code of data ethics' that aims to provide guidance to actors inside and outside the city administration. The core values of responsibility, fairness and transparency may be shared by many actors, but they need to be put into practice in coding, testing and use. Therefore, this first guide is accompanied by an additional support structure that encourages discussion, reflection and exchange on issues related to the implementation of AI technologies in a democratic public administration. The dynamics of these interventions will be discussed, highlighting the importance of both the more abstract guidelines and concrete interventions in AI development and implementation projects.
Victoria Vziatysheva (University of Bern), Maryna Sydorova (University of Bern), Mykola Makhortykh, Vihang Jumle (Institute of Communication and Media Studies)
Long abstract:
AI-driven platforms, such as search engines, play an increasingly important role in how citizens find and consume politics-related information in liberal democracies. The functionality of these platforms is affected by various system factors, including the degree to which they use contextual information or randomise their outputs. However, despite the importance of system factors, it is also integral to account for the agency of users who interact with AI, especially as these interactions have major implications for the quality of information provided by AI. To better understand how user agency can affect the role of AI in the context of democratic decision-making, we empirically investigate how Swiss citizens use AI-driven web search engines, Google and Bing, to find information about federal-level popular votes in the spring of 2024. Using a large-scale survey, we first investigate how the choice of search queries used by a representative sample of Swiss citizens to find information about votes is shaped by their political views, education, and attitudes toward a voted issue. Then, we use the survey data to conduct a virtual agent-based AI audit and simulate the search behaviour of Swiss citizens to systematically investigate whether certain groups of citizens are more likely to be exposed to AI bias (e.g. in terms of the selection of information about votes being disproportionately influenced by the opinions of specific political parties). The findings of the study will contribute to a better understanding of the interaction between system- and user-side factors of AI systems and their implications for democratic decision-making.
Richa Kumar (Trilateral Research Limited)
Long abstract:
Contemporary democratic processes are heavily mediated in the digital space, and they very often become vulnerable to threats like disinformation, misinformation, and foreign information manipulation and interference (FIMI). Some research states that there is a lack of empirical evidence on digital disinformation as a threat to democracy and that it is instead, at best, a moral panic (Jungherr & Schroeder 2021). However, the European Commission funded project ATHENA states that several countries have exploited the Internet to wage campaigns of disinformation and interference in Europe to disrupt democratic processes for their perceived political and economic benefit. As a result, disinformation poses an imminent threat to our infrastructures, economies, values and democracies. The present paper will present 30 manifestations of FIMI in areas such as hate speech, espionage, elite capture, and LGBTIQ+ issues. The paper will draw on interview data with relevant stakeholders such as journalists, politicians, election bodies and fact checkers. The paper will demonstrate how disinformation threatens the fundamentals of our democratic processes, namely fundamental freedoms, electoral equity, and institutional independence and stability, and how it furthers the erosion of public trust and citizens’ participation in governance processes.
Sol Martinez Demarco (Harz University of Applied Sciences)
Long abstract:
This presentation contributes to the long tradition of predictive policing studies by focusing on the early stages of developing AI tools, and in particular on computer science engineers and programmers. It introduces an ongoing R&D project and explores the complexities of integrating different work logics and practices into the development of predictive policing tools. Not only does this project involve the development of algorithms at different levels of maturity, but it also brings together engineers and developers, members of different police departments, legal practitioners, and social sciences and ethics scholars with their associated practices and worldviews. Moreover, as a publicly funded development, it carries with it the additional expectation of societal contribution, and transparency and explainability of its results.
References to predictive policing are abundant, as the vast array of literature shows (Egbert & Leese, 2021; Kaufmann, Egbert & Leese, 2019), but there is less emphasis on the socio-technical and procedural aspects of the development of these systems (Fest et al., 2023). Although studies such as those by Lally (2022) or Widder & Nafus (2023) emphasise how the abstract knowledge practices typical of the software industry contribute to the compartmentalised and isolated developments that AI systems entail, how accountability is, or can be, enacted in the development of AI systems remains a missing element. Drawing on observations from an ongoing project, and comments from members of this consortium, this paper reflects on how the need for democratic accountability can be embedded in the project and its members’ actions and work.