National frameworks for assessing research are evolving to incorporate new data-driven and qualitative methods. As the purposes and politics of assessment grow ever more complex, join leading players from the UK, Australia, Italy, New Zealand and Sweden to consider: what next?
Long abstract
Data science and innovations in AI have the potential to unlock fresh perspectives on national research systems. In this session, we pose the question: how might these capabilities transform research assessment - and what are the risks we must navigate?
In the global landscape of research assessment, we may be seeing the beginnings of a shift away from sporadic ‘exercises’ conducted at intervals of several years, towards emerging possibilities for ‘real-time research assessment’. However, the prospect of real-time research assessment entails hazards, challenges, and a need to establish what 'good practice' might look like.
The possibility of metrics-driven research assessment has been mooted on many occasions before now, but a key difficulty has always been that, while such systems are seen as less burdensome than peer review approaches, the use of metrics is often highly problematic and may run counter to principles of responsible research assessment, as expressed by initiatives like DORA, CoARA, the Leiden Manifesto, the Barcelona Declaration and others.
This panel discussion will consider the policy advantages and dilemmas of using advancements in data science and machine learning to produce novel insights into national research systems. We begin with recent findings from RoRI's AGORRA project, which compares research assessment reforms across national systems, followed by experiences and insights from Australia, New Zealand, Sweden, and the UK.
Research assessment is an essential part of the scientific system. We will present ongoing RoRI research which explores the past, present, and future of research assessment systems globally, including a new typology of national assessment systems and a survey of Global Research Council funders.
Long abstract
There have been many proposed changes to the way we assess research, with an increased focus on so-called 'responsible research assessment' (RRA).
RoRI's AGORRA project is, to our knowledge, the first large-scale project comparing developments in assessment reform around the world.
In this presentation we will briefly present results of recent and ongoing AGORRA workstreams, including: a new typology for comparing national assessment systems, a longitudinal comparison of changes to the systems of 13 countries around the world, and a recent survey of Global Research Council participants, which invited funders from across the globe to answer questions about how they assess research.
Our headline finding is that although responsible research assessment marks a potential paradigm shift away from purely disciplinary and excellence-based research performance paradigms, most national assessment systems and funding agencies are still strongly focused on conventional and long-established modes of evaluating scholarly quality.
Despite this continuity, principles of RRA are gaining increasing importance. But rather than a wholesale paradigm shift, we are witnessing a 'layering' of RRA ideas and practices onto existing assessment systems.
Critically, we also find there is no single or uniform RRA ‘pathway’: different funders and assessment systems pursue RRA in multiple ways, to varying degrees, and with different emphases.
The VQR has evolved towards a balanced integration of peer review, citation metrics, and AI tools. This study examines key methodological shifts, highlighting recent reforms and the role of AI in supporting research evaluation while ensuring a controlled and transparent use of quantitative metrics.
Long abstract
The Italian Research Quality Assessment (VQR) has played a pivotal role in shaping institutional research strategies and national science policies. Over the years, the methodology and criteria underpinning the VQR have undergone significant modifications, aligning with evolving international research evaluation frameworks. The VQR 2020-2024 introduces major reforms, including the expansion of eligible research outputs and the abandonment of the bibliometric algorithm previously used for the assessment of scientific publications.
This paper examines the evolving evaluation methodologies employed across the four iterations of the VQR conducted to date. Specifically, it highlights a gradual transition from a semi-automated reliance on citation metrics towards an approach where metrics and AI tools serve as complementary instruments to support expert peer review. The study’s findings contribute to the ongoing debate on research evaluation methodologies, reinforcing the argument that metrics should not be entirely discarded but rather used with caution and in a well-balanced manner, particularly in large-scale assessments such as Italy’s periodic research quality evaluation.
In April 2024, the New Zealand government established an advisory group to review all aspects of the university system in New Zealand, including policy settings, funding, governance, and academic governance. It was paralleled by a review of the research and innovation system, with the same chair for both reviews and observers from each panel engaged on the other, to ensure a comprehensive review of the entire knowledge production system. As part of the review, the future of the Performance Based Research Fund (PBRF) was considered in depth. The PBRF comprises about 16% of the universities' funding and about 30% of the Crown's funding of the system. Its structure is very similar in principle to the UK's REF, with core funding calculated from research degree completions, research grant and contract income (allowing that in NZ government research is fully cost-funded), and an extensive and expensive assessment of every academic's portfolio every 6-7 years. The review team considered at length the purpose of the PBRF in the NZ context and engaged in much international consultation, recognising that much has evolved since its introduction. It focused its recommendations on what would be more effective moving ahead: a more responsive and forward-looking process with a clear purpose, namely to promote research intensity within the eight universities. I will present the considerations from the review and, assuming the Crown has announced its response to the review, explain the recommended revisions.
Sweden will introduce a new research indicator for PBRF in 2028, rewarding HEIs for strategic recruitment of young researchers in key scientific fields. This aims to promote cost-effective, useful research assessment and supports ongoing discussions on responsible research assessment.
Long abstract
The Swedish system for assessing and promoting research quality comprises several components. A wide and flexible set of indicators on research funding, research personnel and scientific publication is collected on a yearly basis by the Swedish Research Council. A report – The Swedish Research Barometer: Swedish research in an international perspective – published every second year, covers the same three areas and gives an overall description of the state and progress of the Swedish research system. The report highlights both how Sweden compares internationally as a research nation, and how the Swedish higher education sector has developed over time. It provides the basis for policy discussions and informed decisions on research funding.
In addition, tailor-made evaluations of research quality are carried out by HEIs according to the specific needs of each HEI. The evaluations correspond to a national framework developed by the Association of Swedish HEIs, setting out criteria for quality assurance and quality enhancement and thereby strengthening the HEIs' ownership and autonomy regarding research quality. Institutional reviews at the national level are conducted by the Swedish Higher Education Authority, which monitors the HEIs' compliance with the framework. In addition, national evaluations of research quality are carried out by the Swedish Research Council, providing a national overview of the quality and societal impact of research in a specific research field.
Kristina Tegler Jerselius (Swedish Research Council)
Accepted papers
Session 1 Monday 30 June, 2025, -