- Convenors:
- Jack Stilgoe (UCL), Noortje Marres (University of Warwick), Ismael Rafols (Universitat Politècnica de València), Tommaso Ciarli (UNU-MERIT, United Nations University), Cian O'Donovan (University College London)
- Chair:
- Jack Stilgoe (UCL)
- Format:
- Traditional Open Panel
Short Abstract
This panel is about new approaches to mapping and metascience as tools for opening up science and innovation.
Description
Research in Science and Technology Studies has historically been entangled with ideas of the ‘Science of Science’, dating back at least to Eugene Garfield (1955) and Derek de Solla Price (1963). The renewed interest in ‘metascience’ (see, for example, the recent Metascience 2025 conference in London) offers opportunities and challenges for those in STS interested in constructivist, interpretive and critical approaches. The policy salience of new science and technology, and AI in particular, is growing, and with it comes a demand for new tools of justification. Many current users of metascience approaches are more interested in questions of speed, with innovation seen as an end in itself, than in questions of direction (see, for example, Fortunato et al., 2018). We think new approaches to, for example, scientometrics and topic modelling can be used to open up emerging science and innovation to scrutiny.
This panel will be run by members of the project team for the UKRI Metascience project on Public Values in AI Research (PAIR). The PAIR team are developing new approaches to understanding emerging AI research, its embedded values and its potential to address public value questions, building on approaches from Sarewitz and Bozeman, Yegros and others.
We invite papers using, developing or analysing metascientific approaches to open up new possibilities for innovation and innovation policy, particularly in the area of AI research.
Fortunato, S., Bergstrom, C. T., Börner, K., Evans, J. A., Helbing, D., Milojević, S., ... & Barabási, A. L. (2018). Science of science. Science, 359(6379).
Garfield, E. (1955). Citation indexes for science: A new dimension in documentation through association of ideas. Science, 122, 108–111.
Price, D. J. de S. (1963). Little science, big science. Columbia University Press.
Accepted papers
Session 1
Paper short abstract
Public discourse often treats AI as a unified, high-stakes object of controversy. We propose deflationary metascience as a counter-strategy to decompose the AI monolith into precise controversies grounded in concrete contexts, enabling a more democratic scrutiny of AI innovation and policy.
Paper long abstract
Public and policy discourse around artificial intelligence often treats ‘AI’ as a unified, high-stakes object of controversy (Suchman, 2023). We put this assumption to the test by applying controversy mapping methods to scientific discourse and visualising it, revealing a landscape that is fragmented and largely uncontroversial.
Building on a semantic and visual analysis of over two million abstracts on AI, algorithms, and machine learning, we show that algorithms mostly appear as solutions to specific problems, while references to AI in general and explicit controversies are marginal. This finding underpins the Grounding AI Map, a 100m² walkable visualisation of AI-related scientific literature, exhibited at the Danish Technical Museum and Forum Groningen.
The map performs deflation in practice: instead of inflating the AI entity as a singular controversial matter, it systematically decomposes it into thousands of situated, often mundane applications, from medical diagnostics to bus arrival predictions. The exhibition design reinforces this move by denying visitors an overview from above, and requiring interpretation of local clusters by physically moving through them and reading about them. Here, the audience never encounters “AI as such”.
Deflation here is not intended as a depoliticising gesture, but as a strategy for metascience: by first breaking down “AI” into its infrastructural roles across fields, controversy mapping can open up more precise, situated, and democratic questions for innovation policy and public engagement.
Paper short abstract
As political, social and financial capital flows towards an AI race, it seems obvious to ask where we are running to, and whether alternative destinations might be more appropriate. Confronting issues of directionality in AI safety research, we compare innovation trajectories with what societies need.
Paper long abstract
As political, social and financial capital flows towards an AI race, it seems obvious to ask where we are running to, and whether alternative destinations might be more appropriate for public investment. A robust social contract for science needs attention to the purposes of innovation as well as its processes (Sarewitz 2016). These questions of directionality are now familiar to STS scholars (Stirling 2024). But what Mulgan (2025) has called the ‘more-ism’ of innovation policy has narrowed approaches to evaluating science and technology from the standpoint of many other disciplines.
We take this concern with AI upstream, to look at the role that AI safety research (Lazar and Nelson 2023; Ahmed et al. 2024; Gyevnár and Kasirzadeh 2025) might play in shaping trajectories of AI innovation. We probe a gap in contemporary metascience agendas around mapping directionality. We discuss directionality in terms of public values and the extent to which processes and outputs of science, technology and innovation meet diverse societal needs. Bridging bibliometric and qualitative data, we ask: what imagined public values are expressed in AI safety research, and how are societal needs framed within it?
Paper short abstract
From specialist algorithms to LLMs, synthetic data is permeating research ecosystems, even positioned as a national asset in the UK's AI Action Plan. This talk examines the interplay between synthetic data/models, researchers, and institutional contexts, and how these shape epistemic cultures.
Paper long abstract
Artificial Intelligence (AI)-generated synthetic data is permeating science and innovation ecosystems, from shaping the development of novel medical algorithms to forming outputs generated by general-purpose Large Language Models (LLMs). Synthetic data is even positioned as a national asset, as illustrated in the UK government’s 2025 AI Action Plan. This talk examines the interplay between the materiality of synthetic data (and models), researcher perspectives and decision-making, and broader institutional contexts and infrastructures, and how these co-shape practice and policy.
In my discussion of synthetic data, I draw on findings from a UKRI AI metascience fellowship project investigating how synthetic data is reshaping research practices and cultures, “Synthetic Metascience: Tracing Artificial Intelligence-generated epistemic shifts in research practice and cultures”. I expand the concept of AI representation coils (Bennett, Catanzariti and Tollon 2025) to synthetic data practices, examining how values, epistemic assumptions and materials/resources interact to form material and epistemic feedback loops. Specifically, I look at how synthetic data shapes epistemic cultures and knowledge-building practices in medical research, and how medical research epistemologies feed into how synthetic data is used and understood. Crucially, I consider how insights from AI metascience research can be translated into practical tools to inform policy and practice.
Paper short abstract
This paper explores the AI-biomedicine intersection, revealing a disparity between scientific discourse and material practice. We highlight how a benchmarking culture prioritises scoring over utility, whilst a long-tail effect centralises focus, fragmenting local biomedical knowledge.
Paper long abstract
This study critically examines the intersection of AI and biomedicine, exploring a structural disparity between scientific discourse and material practice. We analysed a corpus of 44,286 metadata records (from Europe PMC, Crossref, PubMed, arXiv, bioRxiv, and medRxiv) and 1,388 full-text papers (from Europe PMC and PMC) to map the network of biomedical models and datasets. We distinguish between a "MENTIONS" layer—representing discourse—and a "LINKS" layer representing practical, code-level relationships.
Our findings highlight a culture of benchmarking. Within practical links, evaluation-based connections account for 57.2%, heavily outweighing foundational training (40.4%). This suggests a tendency to prioritise performance metrics on standardised datasets, potentially neglecting real-world practical utility. Consequently, models may appear highly effective in metrics but perform poorly in actual application. Furthermore, the network exhibits a long-tail effect: 56.9% of model nodes and 64.8% of dataset nodes function as single-use entities. This reveals a centralised yet fragmented landscape; whilst research concentrates on a few renowned datasets, "local knowledge" risks marginalisation. For instance, rare disease data from local hospitals may struggle to enter mainstream datasets.
Ultimately, this structural disparity points to a potential gap between the technologies broadly discussed in the literature to construct legitimacy or chase trends, and the material operations actively performed by researchers.
Note: The quantitative findings presented herein are based on preliminary tests utilising Qwen-series models. The formal, updated results will be incorporated into the actual conference presentation.
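The paper's distinction between discursive mentions and practical, code-level links can be illustrated with a minimal sketch: given an edge list of model–dataset links, compute each relation type's share and the proportion of single-use nodes. The edge list, node names, and relation labels below are hypothetical toy data, not drawn from the paper's corpus.

```python
# Illustrative sketch of edge-share and long-tail statistics in a
# model–dataset link network. All data here is hypothetical.
from collections import Counter

# Each edge: (model, dataset, relation) — relation labels are assumptions.
links = [
    ("ModelA", "Dataset1", "evaluation"),
    ("ModelA", "Dataset2", "training"),
    ("ModelB", "Dataset1", "evaluation"),
    ("ModelC", "Dataset3", "training"),
    ("ModelC", "Dataset1", "evaluation"),
]

def edge_type_shares(edges):
    """Return each relation type's share of all practical links."""
    counts = Counter(rel for _, _, rel in edges)
    total = sum(counts.values())
    return {rel: n / total for rel, n in counts.items()}

def single_use_share(edges, index):
    """Share of nodes (index 0 = models, 1 = datasets) appearing in only one edge."""
    counts = Counter(edge[index] for edge in edges)
    return sum(1 for n in counts.values() if n == 1) / len(counts)

print(edge_type_shares(links))     # shares of "evaluation" vs "training" links
print(single_use_share(links, 1))  # fraction of datasets used only once
```

On the real corpus the same two statistics would be computed over extracted code-level relationships rather than this toy list.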
Paper short abstract
This paper introduces The Box World, a participatory and creative tool that experiments with qualitative metascientific mapping. The approach complements existing mapping methods while inviting reflection on their assumptions and implications for opening science and responsible innovation.
Paper long abstract
Metascience seeks to understand how science works by examining its practices, structures, and knowledge production systems. Mapping approaches play an important role in this effort by making patterns in scientific activity visible and opening the ‘black box’ of science to scrutiny. Such mapping increasingly relies on quantitative techniques to analyse large publication datasets.
These approaches reveal patterns in scientific activity and provide valuable insights. However, the structures metascience seeks to map are themselves produced within the same scientific systems and paradigms from which these analytical tools emerge. When research systems are operationalized through publication data, citation relations, and measurable indicators, certain institutional dynamics, epistemic assumptions, cultural habits, and everyday practices remain less visible. Mapping methods may therefore reproduce aspects of the paradigms and infrastructures they seek to critically interrogate. This raises a broader question for critical metascience: how might mapping move beyond entrenched paradigmatic boundaries, and what new insights emerge when these approaches and their assumptions become objects of metascientific scrutiny?
This paper introduces Box World, a participatory and qualitative mapping approach. The tool invites participants to collaboratively explore and map the “boxes” structuring research systems, including disciplinary boundaries, funding logics, evaluation regimes, institutional incentives, and dominant innovation narratives, making underlying structures that shape research systems and innovation visible. The session briefly demonstrates the method and uses it to reflect on metascientific mapping, presenting Box World as an experimental way to examine the paradigmatic ‘boxes’, limits, and assumptions of existing approaches while opening science and innovation to scrutiny.
Paper short abstract
In this paper, we examine priorities in AI research for agriculture as revealed in publications and patents, against societal demands on agriculture as expressed in legal, policy and social documents.
Paper long abstract
Large investments in artificial intelligence (AI) are often justified by their potential contribution to the Sustainable Development Goals (SDGs). However, research and innovation rarely benefit all parts of society equally. This paper examines the public values of research in the context of AI applications in agriculture. Two dimensions are central: alignment, referring to the extent to which the distribution of research topics corresponds to the distribution of societal demands or needs, and appropriateness, referring to whether research outputs can effectively address those needs within specific socio-technical contexts. Focusing on AI-enabled agricultural innovation, we ask: who benefits from advances in AI for agriculture?
To address this question, we develop a framework that compares the supply of research with the distribution of societal demands. Research priorities are measured through publications and patents, using topic modelling and domain thesauri (CABI, AGROVOC). Societal needs are derived from policy documents (FAOlex, Overton), news and policy debates, and social media signals. Appropriateness is assessed by linking AI research topics to contextual factors such as crops, land use, climate conditions, infrastructure, and data availability.
By comparing the distribution of AI-agriculture research with policy priorities and needs expressed by actors such as farmers, policymakers, and the public, the study maps how the benefits of AI research are distributed across countries and social groups. The paper contributes to the development of quantitative tools that can help make visible the (mis)alignment between research priorities in emerging technologies and societal demands.
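The alignment dimension described above can be sketched as a comparison of two distributions over the same set of topics. The topic labels and shares below are hypothetical illustrations, not the project's measurements, and total variation distance is one assumed choice of misalignment measure among several possible.

```python
# Illustrative sketch of topic-level (mis)alignment between research
# supply and societal demand. All labels and numbers are hypothetical.
def misalignment(supply, demand):
    """Total variation distance between two topic distributions:
    0 = perfectly aligned, 1 = completely misaligned."""
    topics = set(supply) | set(demand)
    return 0.5 * sum(abs(supply.get(t, 0.0) - demand.get(t, 0.0)) for t in topics)

# Hypothetical topic shares (each distribution sums to 1).
research_supply = {"yield prediction": 0.5, "pest detection": 0.3, "smallholder credit": 0.2}
societal_demand = {"yield prediction": 0.2, "pest detection": 0.3, "smallholder credit": 0.5}

print(misalignment(research_supply, societal_demand))  # 0.3
```

In the framework described above, the supply distribution would come from topic-modelled publications and patents and the demand distribution from policy and media signals; the gap score could then be broken down per country or social group.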
Paper short abstract
The rise of PubPeer has reshaped scientific accountability. Analyzing 8,121 author responses (2013–2025), we reveal diverse strategies (denial, acknowledgment, challenge, constructive engagement, etc.), highlighting evolving norms of transparency and public accountability in science.
Paper long abstract
The rise of post-publication review platforms, especially PubPeer, has redefined the landscape of scientific accountability. These platforms have created new online venues where publications are scrutinized, but also actively defended. These reviews can lead to corrections or retractions if serious errors or proven misconduct undermine results. In this context, authors’ responses to criticism provide a critical window into the evolving norms of scientific practice and accountability. This talk explores how authors have addressed PubPeer critiques since the platform’s launch. The analysis relies on an original dataset of 8,121 first author responses between 2013 and 2025, combining PubPeer discussions with bibliometric and institutional metadata. Using a semantic approach based on sentence embeddings, we focus on the first response posted by an author following the first comment in a discussion thread.
Our findings reveal a growing adoption of PubPeer as an innovative mechanism of accountability within the scientific community. Our presentation focuses on the nature of the responses generated by the public challenge represented by the opening of a discussion thread. Beyond the initial choice of whether or not to respond to a comment, authors have a varied repertoire of interaction strategies at their disposal. For example, they can acknowledge receipt without continuing the debate, challenge or deny the relevance of the comment, engage in a constructive exchange by responding to criticisms, etc. For the first time, our systematic analysis provides insight into the strategies adopted by authors, revealing the normative and interactional logics underlying their responses to these public challenges.
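One way an embedding-based analysis could map responses onto such a repertoire of strategies is nearest-prototype classification in embedding space. This is an illustrative sketch, not the authors' actual pipeline; the prototype vectors are tiny hypothetical stand-ins for real sentence embeddings produced by an encoder.

```python
# Illustrative sketch: assigning an author response to the strategy whose
# prototype embedding is closest by cosine similarity. Vectors are
# hypothetical 3-d stand-ins for real sentence embeddings.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical prototype embeddings for four response strategies.
prototypes = {
    "denial": [0.9, 0.1, 0.0],
    "acknowledgment": [0.1, 0.9, 0.1],
    "challenge": [0.7, 0.0, 0.6],
    "constructive engagement": [0.0, 0.3, 0.9],
}

def classify(response_embedding):
    """Return the strategy whose prototype is nearest in embedding space."""
    return max(prototypes, key=lambda s: cosine(response_embedding, prototypes[s]))

print(classify([0.05, 0.2, 0.95]))  # nearest to "constructive engagement"
```

In practice the response embedding would come from a sentence encoder over the full response text, and prototypes could be derived from hand-labelled examples rather than fixed by hand.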
Paper short abstract
Interdisciplinary and transdisciplinary (ITD) meta-research remains under-theorised and is often carried out implicitly or informally. This paper examines, through a cultural-critical lens, the pathways through which ITD meta-research emerges and how it can be consolidated into a research programme.
Paper long abstract
Meta-research—broadly defined as the systematic study of research practices, processes, and structures—has gained increasing relevance across disciplines seeking to improve the quality, transparency, and impact of knowledge production. However, within the growing interdisciplinary and transdisciplinary (ITD) scholarship, meta-research remains under-theorised and fragmented, often carried out implicitly or informally. While ITD projects frequently reflect on their own processes, few studies have examined how such reflection evolves into systematic meta-research or what it means to inhabit the role of an ITD meta-researcher. To address this gap, this paper synthesises the findings of a five-year research project that investigated, through a cultural-critical lens, the distinct trajectories leading into ITD meta-research. Using an ethnographic approach, I examine who engages in these studies, illuminating the diverse epistemic, institutional, and personal pathways through which ITD meta-research emerges.
Recent debates in both meta-research and ITD scholarship have underscored the need to develop cumulative, theory-informed understandings of how knowledge integration and collaboration unfold in practice. Understanding the defining features of ITD meta-research is essential for advancing the professionalisation of ITD research and expertise, as meta-research enables scholars and institutions to learn from the conditions, methodologies, and infrastructures that sustain reflexive inquiry. Ultimately, this analysis contributes to broader debates on the future of meta-research, including the role of accompanying research in ITD contexts. By systematising lessons drawn from my own experiences, I aim to help researchers and research organisations better articulate how learning from practice can become an integral, rigorous, and cumulative component of ITD science.