- Convenors:
- Marta Sienkiewicz (Leiden University)
- Tjitske Holtrop (CWTS, Leiden University)
- Thed van Leeuwen (Leiden University)
- Format:
- Combined Format Open Panel
- Location:
- NU-4A25
- Sessions:
- Friday 19 July
Time zone: Europe/Amsterdam
Short Abstract:
This panel seeks to combine academic presentations with an interactive workshop to explore questions around new notions of research quality; their translations into, and reconfigurations of, evaluative practices and standards; and the research practices they make visible and valuable.
Long Abstract:
Research quality assessments influence (symbolic) hierarchies, distribution of resources and career trajectories, with implications for whose and which research can be done. Various reform movements and science policy interventions have been adding new elements to conceptions of quality, and to (e)valuative practices more specifically. New notions of quality such as openness, interdisciplinarity, integrity and societal impact have trickled into research assessments as criteria against which people and ideas are evaluated, into tools aimed to capture them, and as concerns requiring new expertise and tactics from assessors. They change (e)valuative practices and, in turn, the meaning of ‘good research(ers)’.
This panel contains two parts. First, we invite paper contributions bringing an STS lens to the study of quality, specifically of new quality notions, the reform movements that support them and the evaluative situations where they count. How are practices of (e)valuation and (e)valuative decision making changing? What mechanisms (e.g. judgement devices, infrastructures, expertise) do they require, and how are these developed and deployed? Do new notions of quality allow for the accumulation of ‘epistemic capital’ (Fochler, 2016; Rushforth et al., 2019) with which to build viable careers?
Furthermore, the panel includes a separate workshop session exploring the notions of quality discussed in the academic papers, engaging both presenters and audience interactively. The workshop will consider what the academic arrangements around new notions of quality include or exclude (including as they institutionalise and potentially standardise) and what evaluative purposes they serve, with a reflection on what it takes to do research accountability well.
The panel aims to enrich our empirical and conceptual understandings of and frameworks for studying (research) quality, as well as to generate reflections of theoretical and practical significance for the STS themes of (e)valuation, standardisation and justification in situated practices of research assessment and evaluative decision making.
Accepted papers:
Session 1 Friday 19 July, 2024
Paper short abstract:
This contribution builds on a case study investigating how SSH researchers value their own research and publication output in everyday practice. Drawing on valuation studies, the interplay of different quality notions is analyzed and discussed in relation to the accumulation of epistemic capital.
Paper long abstract:
Drawing on STS research conceptualizing valuation as a situated practice (Heuts & Mol, 2013; Dussauge et al., 2015; Helgesson, 2016; Waibel et al., 2021), this contribution asks how SSH researchers value the quality of their own work along multiple dimensions, e.g. in relation to the multiplicity of academic careers (Gläser & Laudel, 2015; Laudel & Bielick, 2018, 2019). Central questions of the panel are addressed from the perspective of researchers making sense of their own research and its outcomes in everyday practice.
Presenting results from a case study based on 47 interviews with researchers in history, political science and area studies, I will discuss how SSH researchers (e)valuate different research and publication practices, focusing on the interplay of epistemic values, reputation building and the addressing of institutional expectations, needs and requirements. Bringing to the fore a plethora of valuation practices and associated quality notions, I will pay special attention to the use of new notions of research quality, e.g. openness, interdisciplinarity, or societal impact, and their relation to traditional conceptualizations of quality in SSH research.
These valuing practices are then discussed in relation to the accumulation of epistemic capital (Fochler, 2016; Rushforth et al., 2019). Such accumulation processes are analyzed as twofold in form: epistemic practice is characterized by a multiplicity of contingent qualities, while valorization is characterized by abstract, quantitative and universal forms of quality. Certain configurations of quality notions can enable or hinder the accumulation process, which depends on both dimensions, even as the abstract dimension reshapes and transforms the concrete one.
Paper short abstract:
We analyze how researchers engage in practices of valuation as they apply for ERC Starting and Consolidator Grants. Drawing upon interviews with applicants, we present narratives of why researchers apply to the ERC and the importance they attribute to the ERC within European science.
Paper long abstract:
Practices of applying for research funding are omnipresent and highly important processes in academic work. Yet so far, there have been no systematic studies that analyze how researchers engage in these practices and how these practices impact their self-understanding and the orientation of their work. This constitutes a significant gap both in research in the social studies of science and in reliable knowledge for science policy making. In this paper, we convey two themes of insight emerging from a project that aims to address this gap by studying how researchers engage in practices of valuation while applying for European Research Council (ERC) grants. Drawing upon interviews with applicants for ERC Starting and Consolidator Grants across multiple German universities, we explore two themes related to high quality in research. First, we present why researchers apply to the ERC: we offer grounded-theory-derived insights about applicants’ motivations to pursue favorable recognition through this evaluation process. Second, we explore the importance that researchers attribute to the ERC within European science: in what ways does the ERC itself represent high-quality science within Europe, to them? Within the larger project that offers these results, we aim to reconstruct the socio-epistemic practices by which researchers construct value in their applications, as well as the relationship between these practices and the emergent normative infrastructures that surround and advise them.
Paper short abstract:
I present a case study highlighting the role of dissent in the internal quality control of science, showing that dissent can be characterized as a social form of quality control that is brought to life in several “institutional” forms, and that it is influenced by non-epistemic values.
Paper long abstract:
In recent years, dissent has mainly been discussed as a driver of scientific progress, but less as a form of self-control in science. In this paper, I focus on dissent to understand similarities and tensions between “social” and “institutional” forms of quality control. Besides the peer review process, there are various other mechanisms of self-regulation, such as scientific conferences and discussions, as well as the social organization of dissent in general. A traditional example of this idea can be found in Merton's (1972) norms, where "organized skepticism" plays a central role in preserving scientific quality. However, despite being referenced at various points in the scientific literature, these different control mechanisms, their roles, points of intervention, and actors are often only marginally discussed.
To identify and analyze the self-control functions of scientific dissent, including the different roles of scientists and the limits of quality control, this study examines scientific progress and the concept of quality through a case study of the HIV/AIDS debate around Peter Duesberg. I will argue that dissent can be characterized as a “social” form of quality control, which is implemented in and accompanied by several “institutional” forms, which may create tensions between the two. Moreover, I show that scientific dissent and discourses are often influenced by non-epistemic values and internal power structures of the scientific community, which can limit but also significantly strengthen the self-control abilities of science. With this, I want to clarify under which conditions diversity can be a driver of better quality control.
Paper short abstract:
Against the background of changes in knowledge production and dissemination during the pandemic, we elaborate on how biomedical researchers (e)valuated the quality of this knowledge and discuss which notions of quality emerge in this process.
Paper long abstract:
The question of the quality of scientific knowledge on COVID-19 became particularly relevant for biomedical researchers during the pandemic. In light of pandemic-related changes in the scientific enterprise - above all the high speed and number of new publications, the partial suspension of peer review, the increasing importance of preprints, and the influx of new researchers from other disciplines entering the field - the reliability of new knowledge on COVID-19 became increasingly questioned by researchers, especially in the acute phase at the beginning of the pandemic. In this paper, we focus on how researchers questioned the quality of this knowledge and which ideas of quality emerged in these evaluation processes.
Drawing on semi-structured interviews with biomedical researchers from the fields of virology, epidemiology, hygiene and other areas of infection research, we elaborate on how researchers (e)valuated scientific knowledge about COVID-19 during the pandemic and how they dealt with uncertain knowledge. Our paper shows how researchers performed processes of both de- and re-stabilization of knowledge on COVID-19. Here, the pandemic-induced changes in the production and dissemination of knowledge led to significant disruptions in the common routines of evaluating new knowledge, whereby researchers questioned its reliability for their scientific practice. In response, researchers developed individual, collective, and institutional practices that attempted to secure knowledge. Finally, we show which quality criteria became significant in this re-evaluation of knowledge claims and discuss their relevance for scientific research within the pandemic context (and beyond).
Paper short abstract:
The study explores an intervention aiming to enact ‘responsible use of metrics’ in peer review of potential nominations for high-stakes scientific prizes. It focuses on how the removal of some quantitative indicators influences valuation practices of the assessment committee.
Paper long abstract:
Initiatives to reform research assessment have for at least a decade strived to reposition how academics are evaluated. The resulting interventions in evaluative practices influence what counts as quality in academia.
A particularly pertinent facet of the reforms concerns promoting responsible use of metrics or decoupling research assessments from quantitative indicators altogether. One of the objectives is to shift understandings of quality from one that is quantitatively proxied (e.g. through numbers of publications or citations) to one rooted in broader values or more inherent in the content of the work. The resulting notion of quality is thus often intended to be de-quantified.
In this paper, I explore the results of an intervention aiming to enact ‘responsible use of metrics’. The case study concerns an internal assessment at a Dutch university in which candidates for high-stakes national prizes are evaluated to decide on the university's nominees. Some quantitative indicators were removed from that process due to concerns around responsible evaluation.
The case offers insight into how the removal of systematically provided indicators influences valuation practices of the assessment committee. How do peer reviewers adapt to this intervention and through which strategies do they construct quality and reach their evaluative decisions?
Theoretically, the case explores the implications of a shift away from metrics for calculation (Callon and Muniesa, 2005) and qualculation (Cochoy, 2002; Callon and Law, 2005) as processes of evaluative decision making. It broadens our understanding of how calc/qualculative rationality works after an intervention in calculative agencies in the context of research assessment reform.
Paper short abstract:
A survey of 485 life scientists engaged in research assessment finds gaps between the importance placed on assessing the credibility of research outputs and satisfaction with their ability to do so. These results suggest opportunities to develop better indicators of credibility and research quality.
Paper long abstract:
Researchers serving on grant review and hiring committees make high-stakes decisions about the quality of candidates’ research. Under conditions of limited time and attention, they may resort to journal-based impact metrics or reputational judgments as extrinsic proxies for research quality. These evaluative practices have been criticized by reform initiatives like the Coalition for Advancing Research Assessment (CoARA) because, inter alia, they do not address intrinsic characteristics of the evaluated research. Building on a previous interview study and on STS work on indicators, we surveyed 485 biology researchers who recently served on a research assessment committee to better understand their practices and priorities in the committee context. We found that assessing credibility or trustworthiness is very important to most (81%) respondents’ evaluations in this setting, although fewer than half of respondents were satisfied with their ability to assess credibility. We found similar gaps for specific evaluative tasks, particularly around the assessment of research integrity and of transparency in reporting. While a substantial proportion (57%) of respondents acknowledged using Journal Impact Factor and journal reputation to assess credibility, our results suggest opportunities to develop better indicators or signals to support the evaluation of credibility and, by extension, research quality. Even as such signals are proxies in their own right, we argue that they can usefully supplement personal inspection in a less distortionary way than journal-based measures and thereby align with notions of research quality as an attribute of the work itself, rather than its container within the published scholarly record.
Paper short abstract:
How does the organisation of interdisciplinary evaluation panels influence the evaluation of specific disciplines in the social sciences? Our study of the European Research Council highlights potential conflicts and complementarities between epistemic styles and evaluation cultures.
Paper long abstract:
The European Research Council (ERC) is one of the most important research funding organisations in Europe. Every year, reviewing panels composed of researchers from various research fields gather to decide which research projects should receive funding. The ERC’s evaluation panels are largely interdisciplinary, and decisions to select a proposal are rarely made by panelists who belong to the same discipline as the proposal being evaluated. How does the organisation of interdisciplinary evaluation panels influence the evaluation of specific disciplines, and what are the potential epistemic consequences of interdisciplinary panels in the social sciences? To answer this question, we draw on interviews conducted with ERC panelists, combined with the analysis of the composition of social science panels since the creation of the ERC in 2007. We first show that the composition of evaluation panels has largely changed over time to accommodate disciplinary differences, leading to the inclusion, regrouping and exclusion of specific disciplines. We then identify three types of tensions and compatibilities between disciplines, impacting the evaluation of different research quality criteria, and the significance attributed to specific disciplines within the panels. Ultimately, our study of the organisation of ERC evaluation panels highlights potential conflicts and complementarities between epistemic styles and evaluation cultures in the social sciences. It invites researchers and reviewers to be more mindful of these differences and proximities in order to promote more careful and respectful evaluation practices.
Paper short abstract:
In a consortium of three non-university research institutes in the field of digital transformation research, we are developing a process evaluation for interdisciplinary research. We want to discuss the methodological path, the results and possible implementations in everyday scientific life.
Paper long abstract:
Interdisciplinary research has been called for and promoted for years (Wissenschaftsrat 2020). It is intended to contribute to solving grand challenges, such as digital transformation, through interdisciplinary collaboration (Klein 2013). Although interdisciplinary research can undeniably lead to highly innovative results (Leahey et al., 2017), the incentives and motivation for interdisciplinary research are met with strongly disciplinary evaluation standards (Blakeney et al., 2021).
The efforts of particularly skilled interdisciplinary researchers have not yet been made sufficiently visible. Furthermore, there is a lack of systematic evaluation of interdisciplinary research (IDR) to promote learning processes and support interdisciplinary career paths. Against this background, and in view of the still young history and methodological challenges of research on digital transformation, we propose an evaluation tool for IDR. It places particular emphasis on the process of collaboration and takes into account the perspectives of researchers at different career stages. We measure the quality of interdisciplinary teamwork along various dimensions such as self-efficacy, communication and meta-cognition.
We draw on the literature on interdisciplinary research and team science (Blakeney et al., 2021; Hoegl and Gemuenden, 2001). Additionally, we conducted three workshops with 25 researchers from different disciplines at all career levels. From these, we identified important scales for measuring the quality of interdisciplinary collaboration through qualitative content analysis (Kuckartz, 2012; Mayring, 2015).
We plan to anchor these scales in our interdisciplinary institutes as a means of self-learning. Together with the panel, we would like to reflect on the implications in relation to power shifts and the organization of research.
Paper short abstract:
Our contribution relates researchers' quality conceptions with their (e-)valuation of citation- and web-based metrics. We then contrast these views with an expert discourse on Altmetrics that hopes to overcome problems in research evaluation by expanding the range of research metrics.
Paper long abstract:
Recent years have witnessed a revival of the old question of what research quality is and how best to assess it. A particular concern revolves around numerical indicators and what they (should) measure. A repeated suggestion is to tie indicators to what the assessed perceive as quality. These debates have been propelled by the emergence of new metrics and their intermingling with novel value concepts. One example is the propagation of Altmetrics and their conjunction with notions of openness and societal impact.
Our contribution firstly relates researchers’ understandings of scientific quality to their (e-)valuation of metric indicators. Secondly, these appraisals are juxtaposed to experts’ visions of evaluation reform through web-infrastructures and novel online data. We conducted 25 semi-structured interviews with German-affiliated researchers from Genetics and Psychology. Respondents reflected on how they assessed research value, on the quality of one of their journal articles, and on the meaning of metrics attached to this paper (citations, Altmetrics, JIF). These perspectives are contrasted with 14 expert interviews about Altmetrics’ perils and potentials.
Both researchers and experts criticize reactive effects of existing indicators. Researchers value indicators that reflect scientific chains of production. Accordingly, they judge few Altmetrics as valuable, most as irrelevant, and some as potentially emulating established indicators’ shortcomings. Despite Altmetrics’ limitations, experts value the concomitant datafication-infrastructuration processes as potentially overcoming problems in research assessment: exploring new kinds of impact, aligning evaluation with newly crafted goals, and incentivizing altered behaviors. Altmetrics thus exemplify a current deterritorialization of research and its reterritorialization along the lines of platform capitalism.
Paper short abstract:
This paper uses the trials and tribulations of a project designing an evaluative frame for more balanced academic recognition and reward at three Dutch universities to reflect on the ambitions and practices of Dutch research evaluation reform and its internal tensions around diversity and inclusion.
Paper long abstract:
In the past decade movements to reform research evaluation have been growing internationally, concerned about misapplications of narrow performance criteria at the expense of other qualities or priorities such as open science, team science, diversity and inclusion, societal relevance, mission-oriented and transdisciplinary research, or citizen science. These debates have brought to the fore the diversity of ambitions, actors and activities coming together in academic work, and the struggle to be inclusive of these in meaningful and effective criteria and frameworks of academic quality and relevance. Between 2021 and 2023 the authors were involved in a project that aimed to facilitate strategic and evaluative decision-making in Dutch academia taking a more balanced approach to the recognition and reward of academics. The purpose was to provide an evaluative framework that was sensitive to a range of diverse and often previously invisible activities specific to different disciplinary contexts. While the project ended prematurely and didn’t deliver on its ambitions, we learned a lot, both from the qualitative research that we did with six different disciplinary teams and from our internal struggles to bring together our ambitions and data. In this paper we will use these lessons to reflect on the ambitions and practices of research evaluation reform in the Netherlands and its internal tensions around diversity and inclusion.