AI and Automation in Evidence Synthesis: An Investigation of Methods Employed in Cochrane, Campbell Collaboration, and Environmental Evidence Reviews
Kristen Scotti
(Carnegie Mellon University Libraries)
Melanie Gainey
Haoyong Lan
(Carnegie Mellon University)
Sarah Young
(Carnegie Mellon University)
Short abstract
Machine learning use in evidence synthesis is growing, rising from 0.6% (2018) to 12.8% (2024). Of 2,253 reviews, ~5% reported ML, mostly for screening (~95%), with underreporting concerns. Few studies applied ML beyond screening. Standardized reporting is needed for transparency and rigor.
Long abstract
Evidence synthesis (ES) aggregates and evaluates research to enhance applicability, inform evidence-based practices, identify knowledge gaps, and guide policy. It supports decision-making and advances scientific consensus across disciplines but typically requires significant human effort. The growing volume of research has compounded these demands, prompting interest in integrating machine learning (ML) to improve efficiency in ES tasks. This study examines reported use of ML in evidence syntheses published in the Cochrane Database of Systematic Reviews, Campbell Systematic Reviews, and Environmental Evidence from 2017 to 2024. Of 2,253 studies analyzed, ~89% were from Cochrane, ~7% from Campbell, and ~4% from Environmental Evidence. The use of ML was explicitly reported in only ~5% of studies, primarily for screening (~95%). Few studies applied ML to other review stages, with four reporting it for search and one each for data extraction and analysis. Only one study reported ML use across multiple stages (search and screening). The first reported ML usage appeared in 2018 (~0.6% of studies), rising to 12.8% in 2024, representing a 2033% increase over six years. While 642 studies (~28%) reported use of ML-enabled tools for screening, only ~18% of those explicitly reported the use of ML functionalities, raising concerns about underreporting. Additionally, only ~6% of ML-reporting studies noted potential biases or limitations inherent to ML techniques. These findings highlight the need for standardized reporting guidelines to ensure transparency and reproducibility in ML-assisted evidence synthesis. Reducing time and effort while maintaining methodological rigor is essential for integrating ML into ES workflows.
Accepted Paper
Synthesizers: metascience for meta-analysis
Session 1, Tuesday 1 July 2025