Accepted Poster

Unlocking Evidence Synthesis: The REX-QA (Review EXtraction and Quality Assessment) Data Corpus  
Paweł Jemioło

Paper Short Abstract

Evidence syntheses generate vast amounts of structured data through extraction, quality assessment, and GRADE evaluations — yet these data remain largely inaccessible. REX-QA aims to collect, standardize, and share these datasets to enhance AI training, reproducibility, and decision-making.

Paper Abstract

Systematic reviews and guideline development require extensive data extraction as well as assessment of the quality and certainty of evidence (e.g., with GRADE). However, these structured datasets are rarely shared beyond the original research teams, which limits their utility for metascience, artificial intelligence (AI) training, and automation in evidence synthesis. The lack of transparency also makes it difficult to evaluate the consistency of assessments across different teams.

The REX-QA (Review EXtraction and Quality Assessment) data corpus is an initiative to collect, standardize, and openly share these valuable datasets. By gathering data from completed systematic reviews and guideline projects, REX-QA (https://osf.io/9pbrs/) promotes transparency, reproducibility, and machine learning applications in evidence synthesis. We also propose a standardized data extraction template to facilitate structured data storage in accessible formats (e.g., CSV).
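
As a purely illustrative sketch, the Python snippet below writes one row of a CSV-based extraction template of this kind. The column names and values are hypothetical assumptions for demonstration; the actual REX-QA template is defined on the project's OSF page.

```python
import csv

# Hypothetical columns for a standardized extraction template; the real
# REX-QA template fields may differ (see https://osf.io/9pbrs/).
FIELDS = [
    "review_id",        # identifier of the systematic review
    "study_id",         # identifier of the included primary study
    "outcome",          # outcome the row refers to
    "extracted_value",  # effect estimate as reported in the study
    "risk_of_bias",     # e.g., "low" / "some concerns" / "high"
    "grade_certainty",  # e.g., "high" / "moderate" / "low" / "very low"
    "assessor",         # reviewer or team that produced the judgement
    "consensus_stage",  # "pre-consensus" or "post-consensus"
]

with open("rexqa_extraction.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow({
        "review_id": "REV-001",
        "study_id": "STUDY-042",
        "outcome": "pain at 6 weeks",
        "extracted_value": "MD -1.2 (95% CI -2.0 to -0.4)",
        "risk_of_bias": "some concerns",
        "grade_certainty": "moderate",
        "assessor": "reviewer_A",
        "consensus_stage": "pre-consensus",
    })
```

Storing one judgement per row, with explicit assessor and consensus-stage fields, keeps pre- and post-consensus records side by side in the same flat file.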

To launch this effort, we are sharing data from our own previous studies, demonstrating the feasibility and benefits of structured data sharing. We welcome contributions of extracted data both before and after team consensus, allowing researchers to analyze variations in evidence assessments and grading practices. This can help identify inconsistencies, improve standardization, and refine AI-assisted evidence appraisal.
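
To illustrate one such analysis, here is a minimal Python sketch that quantifies agreement between hypothetical pre- and post-consensus GRADE ratings using Cohen's kappa. The data and the helper function are illustrative assumptions, not part of REX-QA itself.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two sets of ratings (Cohen's kappa)."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    # Guard against division by zero when expected agreement is perfect.
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Hypothetical GRADE certainty ratings for the same five outcomes,
# recorded before and after the team's consensus meeting.
pre  = ["high", "moderate", "low", "moderate", "very low"]
post = ["moderate", "moderate", "low", "high", "very low"]

print(f"Cohen's kappa (pre vs. post consensus): {cohens_kappa(pre, post):.2f}")
```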

In the era of large language models (LLMs) and AI-driven systematic reviews, structured, high-quality datasets are essential for training reliable models. REX-QA provides a foundation for improving AI-powered evidence synthesis while promoting open science. We invite researchers, systematic reviewers, and AI developers to contribute data, refine standards, and explore innovative applications, fostering a more transparent and efficient future in evidence synthesis.

Panel Poster01
Poster session
  Session 1: Tuesday, 1 July 2025