T4.5
Synthesisers: metascience for meta-analysis
Convenor:
Peter Kolarz (Research on Research Institute (RoRI))
Chair:
Peter Kolarz (Research on Research Institute (RoRI))
Format:
Panel
Location:
Sessions:
Tuesday 1 July, -
Time zone: Europe/London

Short Abstract

Evidence synthesis is often placed at the top of the hierarchy of scientific evidence, and the number of meta-analyses and systematic reviews produced each year continues to grow. This session focuses on new tools, new methods, and the credibility of evidence synthesis.

Long Abstract

This session highlights innovations and challenges in evidence synthesis, meta-analysis, and reproducibility tools. Robert Emprechtinger introduces metaHelper, an R package and web app that streamlines statistical transformations in meta-analysis, featuring effect size conversions and an evaluation via a randomized controlled trial. Sean Smith presents SOLES, a Systematic Online Living Evidence Summary that leverages machine-assisted screening, AI-driven annotations, and community input to track interventions for improving reproducibility. Kristen Scotti presents a large-scale review of 2,253 systematic reviews that reveals a sharp rise in the use of machine learning for screening (from 0.6% in 2018 to 12.8% in 2024), although broader ML applications remain rare and underreported. Thomas Starck reports on a meta-research study of 6,294 Cochrane reviews showing that only 7% of evidence gradings were rated high quality, with no improvement over 15 years. Kinga Bierwiaczonek examines heterogeneity reporting in 1,207 psychological meta-analyses, finding that 22–41% omit heterogeneity entirely and that, when it is reported, it is often ignored in the conclusions. Finally, Maximilian Frank proposes a new metadata standard for scientific publishing that embeds key research elements, such as hypotheses and test statistics, in machine-readable formats to support automated synthesis and transparency.
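
For readers unfamiliar with the statistical transformations these talks address, the sketch below is a minimal, illustrative example in Python. It is not metaHelper's API (metaHelper is an R package) and is not drawn from any of the presented studies; it simply shows two textbook quantities of the kind under discussion: the standard Hasselblad-Hedges conversion from Cohen's d to a log odds ratio, and Higgins' I² heterogeneity statistic.

    import math

    def d_to_log_odds_ratio(d, se_d):
        # Hasselblad-Hedges approximation: ln(OR) = d * pi / sqrt(3).
        # The standard error scales by the same constant.
        scale = math.pi / math.sqrt(3)
        return d * scale, se_d * scale

    def i_squared(q, k):
        # Higgins' I^2 (in percent) from Cochran's Q with k studies:
        # I^2 = max(0, (Q - df) / Q) * 100, where df = k - 1.
        if q <= 0:
            return 0.0
        return max(0.0, (q - (k - 1)) / q) * 100

    # Example: d = 0.5 (SE 0.1) gives ln(OR) of about 0.91 (SE about 0.18);
    # Q = 30 across k = 11 studies gives I^2 of about 66.7%.
    print(d_to_log_odds_ratio(0.5, 0.1))
    print(i_squared(30.0, 11))
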

Accepted papers

Session 1 Tuesday 1 July, 2025, -