- Convenor: Wolfgang Kaltenbrunner (Leiden University)
- Format: Traditional Open Panel
Short Abstract
We bring together contributions from Science Studies research that interrogate the tacit, infrastructural, and socio-economic dimensions of governing science through reform, teasing out the politics of replication, research information infrastructures, and classificatory and inference systems.
Description
STS has long shown that research is messy, contingent work, often far removed from its orderly presentation in textbooks and publications, and continually exceeding the formal structures of disciplines, methods, and procedures. At the same time, partly spurred by the constant demand for increased productivity and growth, scientific work is repeatedly subject to ordering efforts that seek to regulate practice, stabilize knowledge objects, and abstract research into informational formats that can be used for monitoring and governing scientific activity.
Specifically, the most recent wave of reforms, ranging from open science and research integrity policies to replication drives and evaluation reform, promises to strengthen the reliability, reproducibility, and transparency of academic research. Increasingly, such reforms are mediated and enacted via complex infrastructures and stringent procedural regimes, including dedicated funding lines, pre-registration and preprint platforms, new reporting standards, metrics, and systems of metadata. Separately and in combination, these do much to reveal some aspects of how governance operates on the ground while obscuring others.
This panel examines the politics of such ordering. It brings together contributions that analyse how specific policies and infrastructures reorganize scientific work, raising questions about what and whose ideas of order are pursued and what concrete effects these ordering attempts give rise to. The focus is on three sites. It revisits replication as a routine practice within laboratory settings and in misconduct investigations, where it serves to probe how research was conducted and to allocate responsibility. It examines research information infrastructures and their role in organizing institutional dependencies and workflows, while aligning research activity with regimes of productivity and performance. It also analyses classificatory and inference systems that shape representations of science, structuring visibility and recognition across different actors and domains.
Accepted papers
Session 1
Paper short abstract
I discuss observations from my PhD research into practices of investigating misconduct in Japan, specifically how replication of contested claims is sometimes used to tease out and render available to investigators tacit dimensions of research management and work organization.
Paper long abstract
While STS research has shown that replication experiments can surface tacit knowledge and locally sustain interpretative flexibility of experimental design and results, this paper examines their role in a different institutional setting: the investigation of research misconduct. Such investigations are formally structured by regulations, yet in practice leave considerable discretion to appointed investigators in allocating budgets and selecting, executing, and interpreting investigative methods. I focus on what happens when investigators commission the replication of contested research by the very authors under scrutiny.
Drawing on three controversies in the biosciences in Japan (2006–2014), I show that monitored attempts at replication enable investigators to reconstruct the organization of research in labs facing allegations, thus making tangible problems of authorship, hierarchy, and unevenly distributed responsibilities and risks that are otherwise difficult to access and articulate. In two cases, replication appears to have proven more useful as a probe of how experimental work had been conducted and coordinated than as a means to resolve doubts. A third case, in which investigators initially sought to avoid replication altogether and faced significant pushback, demonstrates the extent to which replications are expected to play a role in such inquiries in Japan, and how their absence itself became a point of contention.
Building on Sigl’s notion of tacit governance, I argue that misconduct investigators seriously engage tacit elements that surface during monitored replications: these provide invaluable narrative resources for transforming “case descriptions” into ordered judgments about responsibility, laboratory organization, and research culture.
Paper short abstract
The paper presents the most prevalent lock-in stories circulating among Dutch universities in relation to the Open Research Information (ORI) movement. It highlights the inherent messiness, fuzziness, and overlapping nature of technological, institutional, and behavioral lock-ins.
Paper long abstract
This article investigates how Dutch universities perceive lock-in mechanisms related to the Open Research Information (ORI) movement.
Research information plays a vital role within the research landscape. It helps scholars discover relevant literature, supports institutions in monitoring their research activities, and informs university policymakers in shaping future strategies. It is also often packaged into metrics and analytics used to rank or compare researchers across departments, universities, and research centres, particularly with respect to productivity and performance.
Despite this importance, research information is often locked behind proprietary databases. This has become a major driver of the ORI movement, which advocates switching to more open, community-led research information systems, such as OpenAlex and OpenAIRE. Yet despite growing support for the movement, many Dutch universities still rely on proprietary platforms. From 19 interviews across 14 institutions in the Netherlands, three prevalent lock-in stories are identified. First, proprietary platforms play various roles within the research landscape: actors across a university, with varying levels of awareness of and interest in the ORI movement, have integrated these tools into their workflows, making switching highly difficult. Second, the continuous acquisition of community-driven initiatives by proprietary companies has made universities more hesitant to formally invest in emerging alternatives, slowing the development of ORI platforms. Third, the persistence of the ‘publish or perish’ culture shapes how researchers engage with the ORI movement, as career advancement often remains closely tied to publication metrics and established systems.
Paper short abstract
This paper examines how research infrastructures produce asymmetric and discriminatory representation through SDG classification in bibliometric databases and name-based demographic inference tools such as NamSor.
Paper long abstract
This paper examines algorithmic representational asymmetry in research infrastructure through two linked case studies: SDG-related classification in major bibliometric databases and name-based demographic inference tools used in research policy. The first study analyzes how SDG classifications in Web of Science, Scopus, and OpenAlex represent sustainability research, showing that these systems systematically privilege Global North institutions, economic superpowers, and topics aligned with dominant publication patterns, while overlooking marginalized groups, poorer countries, and other SDG-relevant populations. The second study assesses NamSor, a tool used to infer gender, country of origin, and ethnicity from names, and shows both its discriminatory limitations and its selective usefulness in policy contexts. Together, the two cases reveal a common problem: classification tools presented as neutral infrastructure can encode uneven visibility, misrecognition, and structural bias. Rather than simply measuring existing inequalities, these systems help produce them by shaping which researchers, populations, and themes become legible in data-driven research governance. By bringing bibliometric classification and onomastic inference into the same analytical frame, the paper contributes to STS debates on infrastructure, classification, epistemic injustice, and the politics of algorithmic representation/visibility. It argues that representational asymmetry should be understood as a central feature of contemporary research infrastructures, with important consequences for research evaluation, diversity monitoring, and science policy.
Paper short abstract
Based on an ethnographic study of six European cognitive science, neuroimaging, and psychology labs, I provide an emic perspective on replication. Ultimately, I relate the researchers' meanings, negotiations, reflections, and experiences to the sociology of scientific knowledge on replication.
Paper long abstract
Since claims about a “replication” or “reproducibility crisis” started to emerge in the early 2010s, there has been a heightened focus on the supposed role of replication as a practice related to the credibility and assumed self-correction capacity of science. Metascientists, Open Science proponents, research integrity scholars, and other actors involved in current science reform endeavours often depict replication as a diagnostic tool, enabled by explication in transparent reporting and the sharing of material, that aids in identifying whether a finding is credible or a fluke, the result of questionable research practices or misconduct.
However, the social studies of science have a long history of investigating the socio-epistemic nature of replication. The sociology of scientific knowledge highlights the uncertainty surrounding replication, its confrontation with the experimenters’ regress in both enactment and appraisal, and its dependence on tacit knowledge and social negotiation.
Echoing classical findings by Mulkay and Gilbert (1986), researchers interviewed in the course of a multisite lab ethnography reflect that they employ “partial replications”, in which they take parts of a previous study and incorporate them into their own research. They state that direct or exact replication is not worth the effort, while “partial replication” allows them to build on previous research while still exploring something novel and following their own interests. I relate their reflections to the sociology of scientific knowledge, discuss their implications for current reform movements, and consider how they might contribute to ongoing replication discourse and reform.