Author: J. Hartstein (German Centre for Higher Education Research and Science Studies)
Paper short abstract:
The use of quantitative performance indicators is heavily disputed in the scientific community. However, the circumstances under which these indicators are computed also deserve closer examination. The actors and infrastructures involved await further investigation.
Paper long abstract:
Approaches to governance and quality assurance in the science system draw on different instruments, among which quantitative research evaluation is one of the most prominent. A specific form of science evaluation is the computation of performance indicators, which may be used to inform funding decisions or determine the availability of career opportunities.
While peer review is widely undisputed as a mechanism of quality assurance in science (Wilsdon et al. 2016), data-driven approaches to evaluation - for example the computation of the h-index or the Journal Impact Factor, or the counting of patents or Nobel prizes - are heavily debated in the scientific community (Hicks et al. 2015; Larivière and Gingras 2010; Werner 2015). Performance indicators are often perceived as powerful and inescapable, which is why researchers supposedly adapt their behavior to the evaluation criteria (Beel et al. 2009; Butler 2004; Franck 1999), with both intended and unintended effects.
The investigation of quantitative science evaluation practices raises questions of comparison and quantification, recognition and control. We aim to understand the computation of performance indicators as 'algorithmic techniques' (see Rieder 2017) that contribute to the internal and external regulation of the science system.
We seek to zoom in on the computation of these indicators and the underlying data infrastructures, and to investigate how they come into the world. We shed light on relevant actors and power relations, and thus contribute to a political science of algorithms.
Panel: Technologies that count: big data and social order