Short Abstract
Reproducibility is the cornerstone of the credibility and trustworthiness of science. This session will focus on factors that affect the reproducibility of research, as well as tools, methods and initiatives that can help increase reproducibility.
Long Abstract
This session explores the evolving landscape of reproducibility and transparency in research across disciplines. *** investigates why replications, despite their corrective potential, often fail to influence research agendas—highlighting systemic disincentives, cognitive barriers, and dissemination challenges, and proposing tools to enhance their visibility and uptake. Rita Banzi presents findings from the OSIRIS Delphi study, in which 82 international experts identified 32 key reproducibility checks and discussed their contextual adaptation. Addressing challenges in social media research, Philipp Knöpfle proposes solutions such as synthetic data, shared repositories, and alternative replication strategies. Charlotte Rulkens discusses a case study replicating an art historical analysis of Rembrandt, offering a meta-perspective on the epistemic value of replication in the humanities. Dora Pejdo reports on the Horizon Europe iRISE project, in which a Delphi consultation identified “methodological quality” and “data management training” as top priorities for improving reproducibility.
Despite their promise for scientific self-correction, replications rarely influence research agendas. We examine systemic disincentives, social and cognitive barriers, and dissemination practices that limit their impact. Finally, we discuss pathways and tools to improve replication visibility and uptake.
Long abstract
Replication is a cornerstone of cumulative science, providing essential insights into the reliability of research findings. Yet, despite an increasing number of published replications, they often fail to affect belief in and use of the original findings. Replications are infrequently cited, often disregarded in theoretical discussions, and—barring a few striking exceptions—fail to shift research agendas. This raises a critical question: If replication is fundamental to scientific self-correction, why do its results so often fail to shape knowledge?
In this talk, we explore conceptual and empirical explanations for the limited impact of replications. First, we examine systemic disincentives, including publication biases, prestige hierarchies, and career incentives that prioritize novelty over verification. Second, we discuss cognitive and social-psychological barriers that lead researchers to dismiss or selectively interpret replication results. Third, we highlight how current dissemination practices fail to integrate replications with original studies, making them difficult to find and underutilized.
While large-scale replication efforts have begun to shape research practices, individual replication studies often struggle to achieve similar influence. We argue that improving the visibility and uptake of replications requires more than increasing their number—it demands structural, cultural, and technological shifts in how they are disseminated and discovered. We conclude by discussing potential solutions, including technical tools we are developing as part of the UKRI Metascience programme, designed to improve the discoverability and integration of replication evidence into scientific discourse.
An international, multidisciplinary group of 82 experts in research reproducibility joined the OSIRIS Delphi study. After two online surveys and a consensus meeting, the group selected 32 checks considered important for research reproducibility and discussed the adaptation of these checks to various contexts.
Long abstract
Evidence-based solutions to improve reproducibility in research may help various stakeholders, such as researchers planning their studies or reviewers evaluating them. The Open Science to Increase Reproducibility in Science (OSIRIS) Delphi study aimed to develop a core set of items to be reviewed in research projects across multiple fields of science.
We invited a multidisciplinary group of experts in reproducibility to participate in two rounds of online surveys to rate the importance of 44 reproducibility items prepared by the study Steering Committee. These items covered the whole life cycle of a research project, from planning to dissemination. A third-round online consensus meeting was organized to vote on the items that did not reach consensus and to discuss general themes.
Eighty-two participants from 21 countries responded to Round 1 (May-July 2024, response rate 91%) and 77 to Round 2 (September-October 2024, response rate 93%). Overall, the Delphi process led to the inclusion of 32 items, e.g., availability of data management and statistical plans, description of measures to mitigate bias, estimation of sample size, details on statistical analysis, software and code, findability and availability of datasets and codes. The group excluded six items (e.g., use of registered report, publication in open access journals) and did not reach consensus on seven items (e.g., pre-registration, description of authorship and contributorship).
These results suggest we have reached consensus on a set of reproducibility items considered important when conducting research. Strategies for implementing and adapting them across stages of research and scientific fields will be developed.
Social media data faces replicability challenges due to restricted access, platform variability, and limited transparency. To address these issues, we propose solutions, such as synthetic data use, development of shared research repositories, and adoption of alternative replication approaches.
Long abstract
Social media data from platforms such as Facebook, X, and TikTok can offer valuable insights into human behavior and has therefore become increasingly prominent in the social and behavioral sciences, as well as in other scientific fields. However, recent shifts in data access policies—most notably the substantial restriction and monetization of data access through Application Programming Interfaces (APIs) by platforms such as Facebook and X—have introduced significant barriers to ensuring the reproducibility and replicability of research based on social media data. This presentation highlights the challenges and complexities of replicating studies that use social media data, emphasizing key issues such as restricted data access, limited data transparency, and the temporal and contextual variability of platform content. Drawing on replication attempts in computational social science, we provide an overview of the current state of social media data replications and their most common barriers, and present empirical evidence on the ephemerality and (non-)replicability of such data. We propose strategies for improving replicability, including early and incremental preregistration of research, prospective replications, the use of synthetic or intermediate datasets, and detailed, transparent documentation of methods and data sources. We also advocate for collaboration between researchers, the development of shared repositories of research materials, and the adoption of alternative replication approaches, such as conceptual replications. By addressing these issues, this presentation contributes to a broader conversation on enhancing the reproducibility and replicability of research with social media data, ensuring that such research remains robust in the face of a dynamic and volatile online media landscape.
Under the Horizon Europe iRISE project, a Delphi consultation study mapped the priorities for reproducibility measures and interventions with experts across research domains. “Methodological quality” scored highest as a reproducibility measure, and “Data management training” as an intervention.
Long abstract
Background
The Horizon Europe iRISE project aims to improve research reproducibility. A Delphi consensus study was conducted to gather input from experts across research domains and stakeholder groups (policymakers, funders, editors, publishers, and researchers) about priorities for reproducibility measures and interventions.
Methods
The study involved 73 experts from 34 countries in Consultation Round 1 and 67 in Round 2. Items were rated on a 10-point Likert scale; a reproducibility measure (n=14) or intervention (n=27) reached consensus when at least 70% of panellists scored it 8 or higher. Panellists could also comment on their choices. Comments from Round 1 were summarised using content analysis and provided to the panellists in Round 2. A final panel to create the prioritized lists is being planned.
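As a minimal sketch of the consensus rule described above (an illustration only, not part of the iRISE study materials), the following Python snippet shows how an item's consensus status and mean score could be computed from panellist ratings; the function names and example ratings are hypothetical.

```python
# Illustrative sketch of the Delphi consensus rule described above:
# an item reaches consensus when at least 70% of panellists rate it
# 8 or higher on the 10-point scale. Names and data are hypothetical.

def reaches_consensus(ratings, threshold=8, required_share=0.70):
    """True if the share of ratings at or above `threshold` meets `required_share`."""
    if not ratings:
        return False
    share_high = sum(r >= threshold for r in ratings) / len(ratings)
    return share_high >= required_share

def mean_score(ratings):
    """Mean rating, used here to order items on a priority list."""
    return sum(ratings) / len(ratings)

# Hypothetical ratings from a small panel (1-10 Likert scale)
example_ratings = {
    "Methodological quality": [9, 10, 8, 9, 9, 8, 10, 9],
    "Open peer review":       [6, 7, 8, 5, 9, 6, 7, 8],
}

for item, ratings in example_ratings.items():
    status = "consensus" if reaches_consensus(ratings) else "no consensus"
    print(f"{item}: mean {mean_score(ratings):.2f} ({status})")
```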
Results
From Round 1, a total of 8 reproducibility measures and 6 interventions reached the priority list. The top three priorities among reproducibility measures were “Methodological quality” (9.03), “Reporting quality” (9.00), and “Code and data availability and re-use” (8.66). The top three priorities among interventions were “Data management training” (8.52), “Data quality checks/feedback” (8.33), and “Statistical training” (8.33). After Round 2, no new reproducibility measures or interventions reached consensus for inclusion in the priority list.
Conclusions
This study identified actionable areas for improving reproducibility and transparency by consulting experts from diverse disciplines. A final panel will produce the definitive priority list. The study findings will benefit the scientific community by shaping concrete actions to address critical reproducibility challenges.
To examine the strengths and limitations of replication in the humanities, an art historical Rembrandt study was replicated. This paper takes a meta-perspective on the epistemic advantages suggested by our case study, as a contribution to the discussion on the value of replication in the humanities.
Long abstract
Motivated by the ‘replication crisis’ that deeply impacted the biomedical, natural, and social sciences, a case study was conducted to examine the strengths and limitations of replication studies in the humanities. More specifically, this involved a replication of an art historical Rembrandt attribution study. This paper presents the current status of the discussion on the feasibility and desirability of replication in the humanities and discusses the broader relevance of the case study’s findings for replication in the humanities. The Rembrandt replication revealed epistemic advantages of replication related to the past (contributing to the historiography of the study), the present (assessing the trustworthiness of findings, enriching existing argumentation, improving methodologies, and exchanging expertise), and the future (enhancing transparency and enabling future replicability). The paper discusses how different approaches to designing a replication study in the humanities can be aligned with specific aims. Making humanities studies suitable for future replication entails detailed documentation, preregistration, ensuring the availability of the obtained data, and explicating the study’s methodology. This not only creates the conditions for replication; replication itself also increases epistemic impact and allows future researchers to build on previous research more easily and rigorously. Furthermore, the transparency this entails improves the accountability and trustworthiness of the studies in question. Based on these reflections, it is concluded that replication in the humanities can be both feasible and desirable. Replicability should not be seen as a challenge to credibility or a requirement for quality, but as an important epistemic opportunity for the humanities.
Accepted papers
Session 1 Tuesday 1 July, 2025, -