Human and AI Assessed Reporting Quality as a Predictor of Raw Data Sharing: A Cross-sectional Study
Barbara Ćaćić
(Student, Faculty of Humanities and Social Sciences in Split)
Ivan Buljan
(Faculty of Humanities and Social Sciences in Split)
Paper Short Abstract
Transparency and reproducibility are persistent challenges in science, yet open science practices are not always followed. We assess the reporting quality of cross-sectional psychological studies published in 2023, examine whether it predicts raw data sharing, and compare AI and human assessments.
Paper Abstract
Introduction: Open science practices, such as high-quality reporting and data sharing, are necessary for ensuring transparency and reproducibility. However, authors are often hesitant to share their data. With AI advancing, its role in simplifying research processes, such as data extraction, remains open for exploration. This study will examine whether reporting quality assessed by humans predicts authors' data sharing differently than reporting quality assessed by AI.
Methods: Cross-sectional studies published in 2023 in four Q1 psychology journals (two requiring the STROBE guidelines and two not requiring them) are considered eligible. Emails requesting raw data will be sent to corresponding authors. The reporting quality of included articles will be assessed by measuring adherence to the STROBE guidelines. Two authors and ChatGPT-4o (OpenAI) will assess reporting quality, and human and AI assessments will be compared by calculating inter-rater reliability.
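As an illustration of the planned human-versus-AI comparison, the following is a minimal sketch, assuming each STROBE checklist item is coded 1 (adequately reported) or 0 (not reported) by each rater and that agreement is summarized with Cohen's kappa; the item codings and the choice of statistic are assumptions for illustration, not the study's final protocol.

```python
# Minimal sketch: agreement between a human rater and an AI rater on
# binary STROBE item codings (1 = adequately reported, 0 = not reported).
# The codings below are hypothetical placeholders, not study data.
from sklearn.metrics import cohen_kappa_score

# Hypothetical item-level codings for one article (22 STROBE items).
human_ratings = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1]
ai_ratings    = [1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1]

# Chance-corrected agreement between the two raters.
kappa = cohen_kappa_score(human_ratings, ai_ratings)
print(f"Cohen's kappa (human vs. AI): {kappa:.2f}")
```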
Results: Information on adherence to the STROBE guidelines will be extracted from the 89 included articles by the authors and the AI. A t-test will be used to assess whether reporting quality differs between articles whose authors shared raw data and those whose authors did not. Higher reporting quality is expected to predict raw data sharing. AI and human assessments are not expected to differ.
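To illustrate the planned group comparison, here is a minimal sketch of a t-test on per-article reporting-quality scores, assuming quality is summarized as the proportion of STROBE items adequately reported; the scores and group sizes below are placeholders, not study results.

```python
# Minimal sketch: compare reporting quality (proportion of STROBE items
# adequately reported) between articles whose authors shared raw data
# and those whose authors did not. Scores are hypothetical placeholders.
from scipy import stats

shared = [0.82, 0.91, 0.77, 0.88, 0.95, 0.73]      # authors shared raw data
not_shared = [0.64, 0.71, 0.80, 0.58, 0.69, 0.75]  # authors did not share

# Welch's t-test (no equal-variance assumption) on the two groups.
t_stat, p_value = stats.ttest_ind(shared, not_shared, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```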
Conclusions: Findings could demonstrate the need for journals and other institutions to strengthen adherence to reporting guidelines and data sharing practices, and showcase the potential of AI to assist with research processes and improve scientific practice.
Accepted Poster
Poster session
Session 1, Tuesday 1 July 2025