
Accepted Paper

Mapping public values in AI safety research  
Cian O'Donovan (University College London), Jack Stilgoe (University College London), Liu Canhui (University College London), Noortje Marres (University of Warwick)


Paper short abstract

As political, social and financial capital flows towards an AI race, it seems obvious to ask where we are running to and whether alternative destinations might be more appropriate. Confronting issues of directionality in AI safety research, we compare innovation trajectories with what societies need.

Paper long abstract

As political, social and financial capital flows towards an AI race, it seems obvious to ask where we are running to and whether alternative destinations might be more appropriate for public investment. A robust social contract for science requires attention to the purposes of innovation as well as its processes (Sarewitz 2016). These questions of directionality are now familiar to STS scholars (Stirling 2024). But what Mulgan (2025) has called the ‘more-ism’ of innovation policy has narrowed approaches to evaluating science and technology, sidelining the standpoints of many other disciplines.

We take this concern with AI upstream, looking at the role that AI safety research (Lazar and Nelson 2023; Ahmed et al. 2024; Gyevnár and Kasirzadeh 2025) might play in shaping trajectories of AI innovation. We probe a gap in contemporary metascience agendas around mapping directionality. We discuss directionality in terms of public values and the extent to which the processes and outputs of science, technology and innovation meet diverse societal needs. Bridging bibliometric and qualitative data, we ask what imagined public values are expressed in AI safety research and how societal needs are framed in this research.

Traditional Open Panel P198
Critical metascience
Session 1