- Convenor: Carolin Thiem (VDIVDE-IT)
- Format: Traditional Open Panel
Short Abstract
This panel explores how emerging funding formats—lotteries, prizes, participatory and hybrid schemes—reshape the sociotechnical imaginaries of fairness, excellence, and innovation in research policy across Europe.
Description
Across Europe, the landscape of research funding is changing. Policymakers, funders, and scientific communities are experimenting with new formats—from randomized and hybrid selection procedures to challenge prizes, mission-oriented calls, and participatory evaluation schemes. These initiatives respond to critiques of traditional peer review and the perceived crisis of meritocracy, efficiency, and trust in science funding. At the same time, as AI systems increasingly support or automate decisions about grant approvals and project selections, decision-making may become less transparent. Criteria once shaped by human judgment could be replaced by algorithmic models whose logic is difficult to interpret. When algorithms take on central roles, new power structures and dependencies may emerge — for instance, favoring certain types of proposals or topics that reflect biases in training data. These developments raise challenges of transparency, legitimacy, and human accountability in funding decisions. Yet they also enact new sociotechnical imaginaries of what counts as good research, fair allocation, and responsible innovation.
This panel invites contributions that investigate the future of research funding through the analytical and conceptual tools of Science and Technology Studies (STS). We ask:
• What imaginaries of excellence, fairness, or societal impact underpin emerging funding models?
• How might the increasing use of AI in funding agencies transform decision-making processes in areas such as grant approval and project selection?
• In what ways do national and European funding agencies differ in their experimentation with such formats—and what can be learned from cross-country comparisons?
• How can STS contribute to designing more reflexive, democratic, and resilient funding systems?
We particularly welcome empirical studies of funding organizations, comparative policy analyses, and conceptual reflections on the governance of uncertainty and innovation in funding. The panel seeks to bridge academic research and policy advice by critically examining the politics of fostering the future: how funding practices not only support science but also shape collective visions of desirable futures.
Accepted papers
Session 1

Paper short abstract
This paper examines China's "Talent Recruitment and Champion Designation Mechanism", a funding model that prioritizes talent capabilities and broadens opportunities for young researchers, but still faces challenges in balancing efficiency and fairness.
Paper long abstract
How to maximize the efficiency of limited resources while ensuring distributive fairness is a critical issue for research funding systems worldwide. The Talent Recruitment and Champion Designation Mechanism is a novel model that has emerged in China's scientific research funding sector in recent years. In this model, the government establishes a platform, enterprises propose scientific or technological demands and provide funding, and research teams are recruited nationally or even globally. This paper analyzes this funding model to explore how to fully leverage the effectiveness of new funding formats and address equity issues during project implementation. The model prioritizes candidates' capabilities in talent selection, breaking down barriers related to professional titles and seniority and thereby broadening the channels through which young researchers can obtain projects. Authority is also fully delegated to stimulate the vitality of research teams: the overall leader is granted considerable autonomy in matters such as team formation, technical route decisions, and fund allocation. However, the model encounters difficulties and obstacles during implementation. First, it is hard to determine accurately whether a solution provider possesses sufficient capability. Second, it is unclear whether the outcomes of such projects meet the current talent evaluation criteria of universities and research institutes. Third, projects under this model are usually more challenging and demanding, raising the question of how to provide long-term incentives. Addressing these shortcomings so as to more effectively leverage the potential of this novel funding model is therefore a key focus for further research.
Paper short abstract
What kinds of sociotechnical futures emerge when algorithms become central actors in the evaluation of funding applications and the monitoring of compliance? Taking a practice-theoretical perspective, this paper approaches research funding not as a neutral allocation mechanism but as a set of situated practices.
Paper long abstract
What kinds of sociotechnical futures emerge when algorithms become central actors in the evaluation of funding applications and the monitoring of compliance? Taking a practice-theoretical perspective, this paper approaches research funding not as a neutral allocation mechanism but as a set of situated practices in which “fair” decisions, responsibilities, and values are continuously enacted. Across Europe, funding agencies increasingly deploy algorithmic systems to support or automate assessment, ranking, and oversight processes. These systems do more than optimize workflows: they reconfigure everyday evaluative practices, redistribute epistemic authority, and reshape relations between applicants, reviewers, administrators, and technical infrastructures. The talk addresses two questions: Do algorithmic evaluation systems reduce bias and increase consistency, or do they reconfigure and potentially amplify existing structural inequalities in more opaque ways? How do data infrastructures, model design choices, and training datasets shape the enactment of fairness in research funding, and for whom does this fairness hold?
By becoming embedded in routine funding work, algorithms participate in defining what counts as “good” research, acceptable risk, and legitimate impact. The talk analyzes how such algorithmic arrangements may stabilize particular futures of science funding by favoring standardization, fairness, predictability, and data-intensive forms of accountability. Drawing on Science and Technology Studies, the contribution explores how these emerging sociotechnical futures are performed in practice and how reflexive governance could counteract new forms of opacity, bias, and inequality. It argues that understanding algorithms as practice-shaping actors is crucial for designing funding systems that remain fair, democratic, transparent, and socially responsive.
Paper short abstract
Analyzing Dutch grant applications before and after mandatory impact plans, this study shows how funding criteria shape promises of societal impact and reinforce a linear innovation imaginary that can incentivize systematic overpromising.
Paper long abstract
Across Europe, research funding systems are undergoing experimentation with new allocation mechanisms in response to critiques of traditional peer review and growing concerns about fairness, efficiency, and trust. One of the elements under consideration is the criteria used to evaluate research proposals, as they shape how scientists articulate the future value of their work. In many Anglo-Saxon funding regimes, applicants are required to specify scientific outcomes and societal or technological impacts in advance. These requirements encourage researchers to formulate promises about future benefits, often resulting in systematic overpromising.
This research examines how funding requirements shape such promises. I focus on the Dutch national research funder and analyze how researchers frame expected impacts in grant applications before and after the introduction of mandatory impact-plan frameworks. I argue that these requirements reflect and reinforce a linear innovation imaginary, in which scientific discovery leads predictably to societal impact, an assumption long criticized in the history and sociology of science for misrepresenting the uncertain and nonlinear nature of research and technological development.
By tracing changes in grant proposal language, the study shows how funding calls and evaluation criteria actively configure what counts as valuable research. In doing so, funding practices do not merely allocate resources but also shape collective visions of desirable scientific futures. I reflect on how funding agencies should balance legitimate expectations of societal relevance with the need to avoid incentivizing exaggerated promises. Designing more reflexive funding systems requires greater awareness of how evaluation criteria shape researchers’ narratives about future impact.
Paper short abstract
Research managers play a key role in the governance of research. Our study of the Swedish Energy Agency shows how they choreograph peer review, juggle multiple roles and imaginaries, and use tools ranging from Excel to AI, all with the overarching goal of managing epistemic and bureaucratic risks.
Paper long abstract
Research managers play an important role in the governance of research and innovation. In this study, we place emphasis on the work of research managers in funding agencies, with particular focus on how they navigate multiple imaginaries—of impact, excellence, and “bureaucratic impartiality”—in their daily practice. Accordingly, we ask: How do research managers organise project funding through calls and peer review processes? Which practices and techniques do they use in this process? And how do they influence funding decisions?
Our empirical case is the Swedish Energy Agency, a mission-oriented funding organisation in the area of energy and sustainable transition. We have been granted access to processes and practices through direct observations of review panel meetings and in-depth interviews. This allows us to follow the work of research managers—from formulating a call, through the evaluation process, to the communication of decisions and the follow-up of completed projects. Our interviews describe the research manager as a “tour guide” who “fixes everything”—from evaluation procedures to meals and accommodation. Moreover, we find that research managers employ established tools, such as Excel spreadsheets, as well as novel techniques such as generative AI. A key element of their work is the management of risks of various kinds. These include epistemic risks and bureaucratic risks, such as avoiding bias and unfairness. Overall, we argue that research managers have a central function in the governance of research, and much can be gained by analysing their work in choreographing peer review and designing funding programmes.