Accepted Paper

The Unreasonable Effectiveness of Open Science in AI - A Replication Study  
Odd Erik Gundersen (Norwegian University of Science and Technology), Odd Cappelen, Martin Mølnå (Aneo)

Short abstract

We replicated 22 AI studies that publicly shared either both code and data or only data, with an overall success rate of 50%. Reproducibility rises to 86% when both code and data are shared but falls to 33% when only data is shared. Documenting data is more important than documenting code.

Long abstract

A reproducibility crisis has been reported in science, but the extent to which it affects AI research is not yet fully understood. We therefore performed a systematic replication study of 30 highly cited AI studies, relying on the original materials when available. Eight articles were excluded because they required access to data or hardware that was practically impossible to acquire within the project. Six articles were fully reproduced and five were partially reproduced, so 50% of the included articles were reproduced to some extent. The availability of code and data correlates strongly with reproducibility: 86% of articles that shared both code and data were fully or partially reproduced, compared with 33% of articles that shared only data. The quality of the data documentation also correlates with successful replication; poorly documented or misspecified data will likely lead to unsuccessful replication. Surprisingly, the quality of the code documentation does not correlate with successful replication: whether the code is poorly documented, partially missing, or unversioned does not matter, as long as the code is shared. This study underscores the effectiveness of open science and the importance of properly documenting data work.
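As a quick sanity check, the counts reported in the abstract imply the headline 50% rate directly. A minimal Python sketch of that arithmetic follows; note the subgroup denominators behind the 86% and 33% figures are not given in the abstract, so only the overall rate is computed here.

    # Counts as reported in the abstract.
    total_studies = 30
    excluded = 8            # required inaccessible data or hardware
    fully_reproduced = 6
    partially_reproduced = 5

    included = total_studies - excluded                            # 22 articles attempted
    reproduced_any = fully_reproduced + partially_reproduced       # 11 reproduced to some extent

    rate = reproduced_any / included
    print(f"{reproduced_any}/{included} = {rate:.0%}")             # 11/22 = 50%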

Panel T3.6
Where next for replication, transparency and analysis of QRPs? (I)
  Session 1, Tuesday 1 July 2025