- Convenors: Nils Matzner (Technical University Munich), Ina Möller (Wageningen University and Research)
- Chair: Nils Matzner (Technical University Munich)
- Discussant: Ina Möller (Wageningen University and Research)
- Format: Traditional Open Panel
- Location: NU-2B11
- Sessions: Wednesday 17 July, -, -
Time zone: Europe/Amsterdam
Short Abstract:
Anticipatory Governance is used to build capacity incrementally and to evaluate narratives that shape emerging governance pathways. This session aims to bring together the many usages of Anticipatory Governance as a normative concept, an analytical framework, and an actual practice.
Long Abstract:
Disruptive technologies, major accidents, and crises such as the pandemic strongly impact societies. To prepare for such events, STS scholars have put forward the concept of Anticipatory Governance as a practical and critical way of looking into possible futures. The concept is used to build capacity incrementally and to evaluate narratives that shape emerging governance pathways. It shares similarities with concepts such as Responsible Research and Innovation and reflexive governance.
This session aims to bring together the many usages of Anticipatory Governance as a normative concept, an analytical framework, and an actual practice.
Example topics include:
- Anticipatory Governance applied to emerging technologies such as Artificial Intelligence, geoengineering, or biotechnologies
- Developing the concept of Anticipatory Governance further in comparison to Responsible Research and Innovation, Technology Assessment, reflexive governance and other similar concepts
- Modes of implementation into research, regulation, and public discourses
Accepted papers:
Session 1 Wednesday 17 July, 2024, -
Short abstract:
This contribution aims to provide a theoretical analysis of the main assumptions of the "Anticipatory Governance" framework as characterized in STS, focusing on the critical-reflective affordances it seeks to enact.
Long abstract:
The concept of ‘Anticipatory Governance’ (AG) in the STS domain encompasses a set of practices (both de facto and interventionally activated by STS scholars) designed to build capacities for foresight, engagement, and integration. These capacities are conceived to be extended through society, and their proactive strengthening is intended to enable the possibility of acting "on a variety of inputs to manage emerging knowledge-based technologies while such management is still possible" (Barben et al., 2008; Guston, 2014).
This paper aims to provide a theoretical analysis of the main assumptions of the AG normative approach/framework, focusing on the critical-reflective affordances it seeks to enact. After a brief contextualization of the emergence of AG and a non-exhaustive review of the existing literature on/from this framework, I will define some basic guiding variables for assessing the critical-reflective affordances or spaces of problematization that this (and other normative frameworks) might open up. Subsequently, I will explore how AG, as it is characterized in STS, tends to position itself with respect to each of the previously proposed variables, thereby assessing AG's theoretical critical-reflective affordances. The purpose of this exercise is to highlight some of the virtues and limitations of AG, emphasizing points where AG's potential critical-reflective affordances could be strengthened to promote more robust science and technology activities.
Short abstract:
We aim to increase understanding of (challenges in) the anticipatory governance of medical technologies, by describing and critically reflecting on the perspectives of industry actors, including the (dis)alignment of HTA as an anticipatory practice.
Long abstract:
The governance of medical technologies occurs in a network of actors, dispersed at regional, national, and supranational levels. Within this network, the companies that market medical technologies are an important stakeholder. Governance activities influence medical technology companies by fostering certain technological innovations and not others. At the same time, the development of innovative medical technologies shapes the governance required. In this paper we explore this interaction between innovative medical technologies and attempts to govern them by investigating how medical technology industry actors navigate the (changing) governance landscape. While many attempts have been made to understand governance practices, relatively little is known about the position and practices of industry actors themselves. We conduct in-depth, semi-structured interviews with (Dutch or European) industry actors, including actors from different strata of industry (ranging from start-ups to multinational companies), to gain a heterogeneous perspective. Our analysis focuses on respondents’ experiences surrounding evidence generation, experiences with new regulations, and expectations of governing bodies like health technology assessment (HTA) agencies. Conceptually, our analysis builds on literature on anticipatory governance. This growing body of research theorizes the way in which actors involved in governing ‘the future’ conceptualize futures and actions in the present. We reflect on the legitimacy and adaptability of HTA (as an anticipatory practice) to the context of medical technologies, in terms of values, decision-making procedures, timelines, etc. In doing so, we aim to increase our understanding of challenges and formulate policy recommendations for the governance of medical technology.
Short abstract:
Despite the risks, big-tech narratives propagate the idea that the opacity of black-box systems makes it impossible to regulate generative AI. This paper examines the narratives that have shaped the regulatory past of high-frequency trading algorithms to stipulate new regulatory imaginaries for generative AI.
Long abstract:
Generative AI is transforming the digital media landscape (e.g. Jungherr 2023; Lee HK 2022). However, the future of generative AI is far from certain and depends on our capacity to imagine AI as a regulatory object and anticipate its long-term impacts and risks. While we have seen attempts at regulating AI with various intentions across the US, the EU and China, the opacity and complexity of AI are understood as a barrier to understanding and regulating these systems (Ferrari et al. 2023).
While opacity may create an impasse in imagining AI as a regulatory object, the regulation of high-frequency trading (HFT) algorithms illustrates that it is possible for regulators to abandon the idea that an algorithm needs to be ‘opened’ in order to be regulated (Seyfert 2021).
This paper examines the narratives that have shaped the regulatory past of HFT algorithms to stipulate new regulatory imaginaries for generative AI. It does so by drawing parallels between the anticipated risks of HFT algorithms prior to the global financial crisis and present controversies surrounding generative AI. These parallels focus in particular on the narratives of opaque systems and technological developments outpacing regulatory efforts. While doing this shows that these narratives work to suppress regulatory possibilities in mitigating risks, regulatory efforts after the financial crisis reveal that such possibilities can only be suppressed for as long as risk remains uncertain. For the anticipatory governance of generative AI, it is therefore crucial to create participatory practices that illustrate anticipated risks for reflective decision-making.
Short abstract:
This paper deconstructs proposals for risk-risk analysis as an anticipatory approach to evaluate the desirability of developing geoengineering. It identifies practical and ethical shortcomings in extant proposals, discusses their implications, and proposes ways to make such analysis useful.
Long abstract:
In the face of rapidly growing climate harms, research into solar geoengineering promises possibilities of averting some of the risks of otherwise unavoidable climate change. Yet the technology would also bring novel risks. Risk-risk, or risk trade-off, analysis has been proposed as an appropriate anticipatory approach to evaluate the desirability of developing solar geoengineering. This paper examines the discursive implications of risk-based approaches to climate policy, and deconstructs extant proposals for risk trade-off analysis of policy options. It argues that such proposals construct a false binary between climate harm and geoengineering and rely on a consequentialist ‘lesser evil’ argument. In both respects the discourse fails to anticipate interaction effects between potential responses. Further, the discourse frames solar geoengineering as an ‘exceptional response’ to climate risk, yet paradoxically advocates evaluation using technocratic utilitarian risk calculus, rather than engaging with the securitisation and pre-emption implied by exceptional or emergency circumstances. The paper then discusses the implications of these shortcomings for anticipatory and precautionary governance of solar geoengineering, suggesting practical methodological improvements to risk-risk analysis. It concludes by making a case for rigorous consideration of the risks and benefits of a wider range of exceptional responses to climate change, effective anticipatory governance for any exceptional response, and the urgent development of broad public participation mechanisms for shaping responses to growing climate risk.
Short abstract:
Building on concepts concerning the anticipatory governance (AG) of emerging technologies, the paper explores challenges and limitations brought about by the much-needed attempts to expand AG so that it can advance both sustainable innovations and societal transformations.
Long abstract:
Building on concepts and cases concerning the research on, and practices dedicated to, the anticipatory governance (AG) of new and emerging fields of science and technology, such as nanotechnology and climate engineering, the paper explores some of the challenges and limitations that accompany the much-needed attempts to expand the capacities of AG to advance both sustainable innovations and societal transformations. The paper’s arguments concern the following key themes:
- First, I will review the initial discussion around technology assessment (TA) and AG, and present the core elements of AG developed in institutional experiments dedicated to facilitating endeavors of TA-based AG.
- Second, I will suggest that advancing the significance of AG requires an assessment of the existing institutional capacities of anticipation in governance, together with an exploration of how the governance of anticipatory capacities might be enhanced.
- Third, I will discuss some ideas to scale up and expand the reach of AG not only as regards emerging and potentially disruptive technologies but also as regards sustainable societal transformations.
- Fourth, I will conclude by presenting a potential selection of divergent case studies to illustrate the massive challenges ahead, for example because of the politics of expectations, knowledge, and practices involved.
Short abstract:
This paper examines recent trends in the digital tech industry to create new jobs and structures that integrate ethical concerns into their tech development processes and products, and asks how these developments represent new opportunities and challenges for anticipatory governance.
Long abstract:
Since its inception, a key challenge facing anticipatory governance, and the related program of responsible innovation, has been engaging with and enrolling the private sector in efforts to steer innovation towards more just and sustainable outcomes. This paper examines the digital tech industry, an industry whose importance has only grown with the renewed interest in AI, and specifically recent trends to create new jobs and structures that integrate ethical concerns into tech development processes and products, and asks how these developments represent new opportunities and challenges for anticipatory governance. Our data come from an NSF-sponsored project that collected a set of 30 semi-structured interviews with hiring managers who were looking to fill what we refer to as 'ethical tech' jobs (ethics-related roles such as ethical hacker or director of responsible AI), and with 'tech critics,' a category that includes people who used to work in the tech industry, specifically in roles related to ethics or responsibility. Our findings show how organizational structures, job precarity, culture, and leadership can limit the power of those working in roles relating to ethics and responsibility. Yet respondents also shed light on strategies for educating and training future ethical tech workers that may help them overcome those challenges, as well as insights for building 'integration' capacities that are essential to anticipatory governance. We conclude by reflecting on the implications of our findings given some of the most recent trends in the tech industry, including job contraction and a more prominent focus on the regulation of AI.
Short abstract:
The use of AI in synthetic biology challenges existing governance frameworks. This study systematically evaluates their assumptions and how adaptations can be made for the future. We argue for a shift in research paradigms from ‘T+0’ to ‘T+1’, transitioning from reactive to anticipatory governance.
Long abstract:
The integration of AI in synthetic biology introduces significant uncertainty and complexity, posing challenges to existing governance frameworks. This study systematically examines and evaluates the assumptions of current governance frameworks for AI and synthetic biology, assessing their suitability and limitations in the context of their integration, and how adaptations can be made for future readiness. We argue for a shift in research paradigms and thinking from ‘T+0’ to ‘T+1’ when developing governance frameworks, transitioning from reactive to anticipatory governance, and from passive to proactive governance. By employing fact-based technology foresight, we construct hypothetical governance models to accommodate the integration trends of emerging technologies and their implications for societal governance structures. A concrete example in AI and synthetic biology is provided to illustrate this approach. This study underscores the necessity for governance structures to possess foresight, enabling timely responses in future scenarios so that governance remains effective and resilient. The aim of the study is to gain a deeper understanding of how the integration and advancement of emerging technologies can redefine governance paradigms, guiding decision-making in future governance practices.