- Convenor: Casey Wimsatt (Symbionica, LLC)
- Chairs: Peter Hilpert (University of Lausanne), Julian Quandt (WU Vienna University of Economics and Business), Johannes Breuer (GESIS - Leibniz Institute for the Social Sciences)
- Discussants: Matthew Vowels (CHUV), Joaquin Gulloso, Ansgar Scherp (University of Ulm), Jamie Cummins (University of Bern)
- Format: Pre-conference virtual symposium
Short Abstract:
Meta-scientists, publishers, and authors have for many years developed and used software tools to find issues in scientific papers before and after publication. This symposium brings together four ongoing projects to discuss how the latest AI tools can be used and evaluated for this purpose.
Description:
For over a decade, forensic scientists, meta-scientists, publishers, and authors have developed and used software tools to find issues in scientific preprints and publications. The recent rise of Large Language Models (LLMs) provides these groups with a whole new range of tools. This symposium brings together people from four ongoing projects to discuss how the latest LLMs and other AI tools are being developed, evaluated, and used to detect errors in scientific publications: The Black Spatula Project, RegCheck, SocEnRep, and Psy-RAG. These projects involve people from different countries and academic disciplines, as well as people outside academia. Their structures, pipelines, and goals differ: the Black Spatula Project is an international grassroots initiative driven mostly by volunteer effort; SocEnRep is a research and infrastructure project funded by the German Research Foundation (DFG); and RegCheck and Psy-RAG are academic research projects based in Switzerland. The Black Spatula Project aims to use LLMs to detect errors in scientific publications, SocEnRep is developing a pipeline for automated reproducibility checks in economics and the social sciences, RegCheck compares preregistration documents with the corresponding papers, and Psy-RAG combines LLMs with retrieval-augmented generation (RAG) to identify problematic patterns, inconsistencies, and error propagation in psychological research. What all of these projects have in common is that they use AI/LLMs for scalable, (semi-)automated checks of academic publications. The speakers will first present preliminary results, then the challenges ahead and potential solutions, followed by a moderated discussion.
Register to attend: https://cos-io.zoom.us/webinar/register/WN_5j-VbV_iSIS1Kf8ve9PIKQ#/registration