How do different types of expert contribute in research assessment? An empirical analysis of 300 experts’ assessment behaviors
Fang Xu
(Chinese Academy of Sciences)
Yanlin Cheng
Xiaoxuan Li
Short abstract
We undertook a comparative investigation of the review behaviors and outcomes of nearly 300 experts who participated in the assessment of over 20 major S&T projects in 2024. This analysis was designed to elucidate the unique contributions of each type of expert and to identify distinct review patterns.
Long abstract
In practice, the participation of experts from diverse backgrounds in research assessment is increasingly prevalent. This trend is particularly evident in reviews of demand-oriented, large-investment science and technology (S&T) projects. In such contexts, experts from government management departments, market users, and other relevant sectors are often invited to join academic experts in forming a comprehensive review panel. The review process typically involves both quantitative scoring and qualitative evaluation, with experts providing detailed opinions to support their assessments.
Given this backdrop, it is essential to understand how experts from different backgrounds contribute uniquely to the research assessment process. Specifically, we aim to explore the distinct roles played by government management experts, user experts, and academic experts in project reviews. We are particularly interested in identifying the consistencies and differences among these groups in their quantitative scores and qualitative evaluation opinions.

To address these questions, we conducted a detailed statistical analysis and comparative study of the review behaviors and outcomes of nearly 300 experts who participated in the review of over 20 major S&T projects in 2024. Our analysis seeks to uncover the specific contributions of each type of expert and to identify patterns in their review behaviors and results.

The findings of this study will provide valuable insights into the distinct roles and behaviors of experts from different backgrounds in research assessment. Moreover, our results will offer evidence-based recommendations for optimizing the use of expert input in future research assessments, thereby enhancing the effectiveness and fairness of the evaluation process.
Accepted Paper
Peer review: pressures and possibilities
Session 1, Tuesday 1 July, 2025