- Convenors:
- Julio Paulos (ETH Zürich)
- Ignacio Perez (University of Oxford)
- Format:
- Traditional Open Panel
Short Abstract
Models have long been central to how societies govern their futures. From climate scenarios to economic forecasts, they turn complex dynamics into action. Today, digital twins, LLMs and simulations not only predict but act, transforming modeling into a politics of future-making.
Description
Models have long been central to the ways societies imagine, plan and govern their futures. From epidemiological curves and mobility simulations to climate scenarios and economic forecasts, they have served as technologies of future-making, representing complex dynamics in order to anticipate and prepare for what may come. Through dashboards, indicators and simulations, abstract futures are rendered actionable in the present, turning anticipation into administration.
Today, however, we are witnessing a profound reconfiguration of what models are and what they do. Digital twins, large language models (LLMs), real-time dashboards and generative simulations no longer merely represent reality from a distance but increasingly intervene within it. A digital twin of a city no longer simply forecasts flood risks but automatically adjusts water infrastructure; a mobility dashboard not only visualizes congestion but redirects flows as they emerge; an LLM no longer summarizes expertise but drafts policy texts that circulate and persuade. Models, once instruments of knowledge, are becoming sociotechnical actors in their own right, devices that both describe and perform the worlds they claim to represent (Latour 1987; Suchman 2007).
This shift from the predictive and prescriptive to the automated and generative marks a new stage in the politics of modeling. Forecasts that once invited reflection now execute interventions. Promises of efficiency and resilience coexist with risks of bias, opacity, and the erosion of judgment. Dashboards promise transparency even as they delimit what counts as knowledge, concealing the infrastructures of data, computation, and labor that sustain their authority (MacKenzie 2021).
We situate these developments within a wider landscape of technologies of future-making, including scenario planning, foresight and road mapping, which do more than represent. They configure imaginaries, orient decisions and redistribute authority. Futures, in this view, are not merely described but actively constituted.
Accepted papers
Session 1
Paper short abstract
This paper examines how planning in Switzerland is being reconfigured through digital and increasingly automated infrastructures, focusing on how software environments and emerging AI logics reorganise environmental governance.
Paper long abstract
Urban planning is increasingly conducted through digital geoportals where heterogeneous datasets—zoning regulations, public-law restrictions on landownership, biodiversity indicators, noise exposure, air pollution, light emissions, or heat risks—are layered, queried, and visualised at parcel level. While these infrastructures promise more integrated environmental knowledge, it remains unclear how such datasets actually become relevant for planning decisions.
This paper explores this question through the case of the geoportal ecosystem used in the canton of Zürich. The platform already hosts extensive environmental and biodiversity layers, yet the presence of data does not automatically translate into planning relevance. Environmental datasets often remain contextual information rather than binding considerations in land-use decisions.
To examine this tension, the paper approaches geoportals as boundary infrastructures that coordinate different professional and institutional worlds—planning authorities, GIS specialists, ecologists, data providers, and consulting firms. Rather than simply making knowledge visible, these infrastructures shape how datasets can travel across domains and become admissible within planning procedures. Standards, metadata requirements, validation rules, scale choices, and interface defaults all influence whether environmental data can move from “background layer” to decision-bearing argument.
Empirically, the analysis combines document analysis, platform walkthroughs, and exploratory interviews with planners, GIS specialists, and ecological data experts involved in Zürich’s planning system. The paper traces the moments where environmental information is translated, reformatted, or filtered in order to become actionable within planning processes.
By focusing on these translation sites, the paper asks how digital infrastructures participate in defining what counts as relevant environmental knowledge in contemporary land-use governance.
Paper short abstract
Over-reliance on the promises of digital twins blunts our ability to react flexibly to a changing environment. This research investigates how large-scale digital twins of the Earth actively reconfigure the future, ultimately making us less resilient by narrowing the scope of climate interventions.
Paper long abstract
Digital twin (DT) technologies emerged out of manufacturing and engineering industries, designed to link an object, product, or system – the ‘physical twin’ – with a virtual ‘digital twin’ – an identical, mutually-affecting representation that updates in real time. In environmental contexts, they are increasingly applied as a technological innovation that promises to solve the interlinked crises of biodiversity loss, climate change, and pollution by allowing us to observe (and, in theory, avoid) undesirable potential futures. By creating AI-enabled, multi-scale, continually-updating models, digital twins of Earth system processes aim to improve resilience by informing governmental decision-making processes, forecasting future climate events, and helping with resource allocation. Whilst they have the potential to be impactful at local scales, large-scale digital twins of the Earth actively reconfigure how we envisage and engage with the future by changing our approach to planning and prediction. They only forecast what we already know to model, creating a form of narrow resilience. ‘Informed’ becomes ‘outsourced’ in decision-making, with interventions targeted to solve specific ‘what-if’ scenarios, turning attention away from supporting adaptability to cope with both expected and unforeseen situations. In this, they act as ‘blinkered oracles’: revered actors that offer the myth of a certain future. This research draws on findings from a multi-sited ethnography, combined with literature from science and technology studies, responsible innovation, and risk and disaster reduction, to consider how digital twins of the Earth create potential futures that we privilege over the very real present.
Paper short abstract
This talk examines a recent energy system and climate forecast produced for the Finnish policy-making community. It further considers the politics of the modellers involved in producing the report, which anticipates a policy failure.
Paper long abstract
Energy system and climate models play a central role in evidence-based policy, integrating technical, economic, and operational knowledge to inform political decision-makers. This talk reports a field study with experts involved in Finnish climate and energy policy modelling, focusing on two themes: failure to model, and models projecting policy failure. The experts produced the report "New Measures and Scenarios for National Energy and Climate Policy" (KEITO). In a nationally visible public debate, KEITO demonstrated that Finland's goal of reaching carbon neutrality by 2035 is expected to fail — attributing this to the current government's insufficient implementation of the climate law and politically sanctioned large-scale forest logging.
We theorise KEITO's models and scenarios as an epistemic object produced by a distinct modelling community with its own epistemic culture: advisory science at state research institutes. We treat this object as an act of world-making. While ostensibly a value-free representation, we are interested in the modellers' preconceptions about the political dimensions and reception of their findings. We ask: to what degree did modellers understand their report as an instrument rather than a representation, and what did it mean to them that the model predicted a policy failure? Earlier studies in energy modelling (e.g. Silvast et al., 2020, 2023; Vergo et al., 2026) show that the central professional rationale of such models is producing legitimacy for evidence-based decisions. We examine what judgments supported the report's scenario-building and how uncertainty was communicated as a softening element within its narrative.
Paper short abstract
This paper examines how synthetic data and AI shape future-oriented knowledge in humanitarian crisis forecasting. It shows how models fill data gaps to support proactive decisions while raising political, ontological and epistemological questions about how plausible futures are made present.
Paper long abstract
Two current developments can be observed in the production of knowledge. First, knowledge is becoming increasingly datafied and algorithmised. Second, more knowledge is being collected in an anticipatory or predictive manner about previously unencountered situations. This knowledge may be relevant to decision-making for actors and organisations, but it remains highly speculative and contestable in ontological and epistemological terms, especially when its material backing is taken into account. Enabled by new methods of predictive algorithms and generative AI, synthetic data is increasingly being used to plausibly close existing gaps in forecasting processes that result from insufficient or non-existent data.
This raises two questions concerning synthetic data that this contribution addresses: (a) which knowledge about the future is made ‘tangible’ or ‘durable’ in the present using synthetic data and its underlying models, and (b) which ontological and epistemological challenges arise and are discussed when synthetic data models are used to create ‘plausible futures’.
This contribution draws on empirical material from the investigation of a data-driven anticipatory action programme used by scientific and political actors for humanitarian crisis forecasting. It shows how these actors use synthetic data models to close data gaps and enable a shift from reactive to proactive, real-time crisis management. Through the lens of material-discursive practices, the contribution traces the entanglement of onto-epistemological challenges and their underlying politics in the production of future knowledge, as well as the mediating role of these models between the speculative and the calculable.
Paper short abstract
This paper considers ontologies as a crucial prerequisite for computer models. Through a comparative study of the World3 and the LAWM models' material inscriptions in their source code and documentation, we show that the formation of what the model acts on is as important as how it acts.
Paper long abstract
This paper clarifies the role of ontologies in the future-oriented performance of models through a comparative analysis. A hybrid between technical data structures and conceptual framing, ontologies are implicit, static requirements of any simulation, often overshadowed by their dynamic algorithmic counterparts. Yet these ‘givens’ are always what predetermines the scope of any future-making process.
Before asking “how does the model act?”, we ask “what does it act on?”. Be it humans who sort things out for cognitive convenience and social agency (Bowker and Star, 1999), machine learning, which always requires categories to classify data into (Campolo & Schwerzmann, 2023), or more traditional software systems, which depend on sound data structures to model the problem domain upon which procedures will act (Wirth, 1976), the question of what often precedes that of the how.
We consider the role of ontologies as a productive perspective on the work of models through a comparative study. First, we compare the ontologies at play in one of the most influential models in contemporary modelling science, World3 (Meadows et al., 1972), with those of its staunchest critic, the Latin American World Model (Herrera, 1976). Through a material examination of the respective source codes, we argue that, starting from different premises, they inscribe fundamentally different political worlds.
We then open up these findings to contemporary software systems, in particular Palantir's Foundry system. From software documentation and company communications, we trace how ontologies allow an organization, through its software, to operationalize worldmaking as future-building.
Paper short abstract
Industrial digital twins establish new forms of pre-emptive modelling aimed at the simultaneous integration of remote control and prediction. Drawing on a technography of the Siemens Xcelerator platform, the paper examines how digital twin infrastructures stabilize regimes of logistical prediction.
Paper long abstract
Industrial digital twins establish new forms of pre-emptive modelling by aiming at the simultaneous integration of remote control and prediction in industrial practices and processes. Based on bidirectional data processing, models govern physical products, systems, and workflows. In doing so, digital twins stabilize regimes of logistical prediction in which models no longer merely represent futures but intervene in and operationalize them in real time within software infrastructures and global supply chains (Horn and Richardson 2025). Big Tech companies such as Siemens, Palantir, and Nvidia promote these technologies as technocratic forms of logistical control and prediction that seek to govern processes before they occur, thereby regulating the future in the present (Halpern 2025; Korenhof et al. 2023; Smith 2024).
Beyond techno-libertarian promises and market speculation, industrial digital twins consolidate infrastructures composed of sensor networks, global supply chains, algorithms, labour, and proprietary data centres. Adopting an infrastructural perspective, this paper examines how pre-emptive modelling through industrial digital twins is co-constituted by industrial software infrastructures. Methodologically, it draws on a technography of the Siemens Xcelerator Marketplace, a platform that integrates industrial software applications, digital twin models, and developer interfaces to sensor networks and cloud infrastructures such as AWS, Azure, and Nvidia Omniverse (Van Der Vlist 2024). Drawing on developer materials, platform documentation, and corporate publications, the paper analyses how Siemens Xcelerator structures labour hierarchies, operational logics, as well as imaginaries and epistemologies of modelling, thereby stabilizing specific infrastructures and temporal regimes of logistical prediction.
Paper short abstract
This proposal explores how economic forecasting models act by simulating the economy as a gendered body enduring shocks. It shows that by doing so, models frame sudden policy change as the only available political action, producing the need for constant economic monitoring.
Paper long abstract
Economic forecasting models function by simulating the economic future. Such models now rely on increasingly frequent data inputs and are, in turn, increasingly relied on for decision-making. Indeed, policymakers continuously ask for economic expertise, thus shortening the delay between two runs of a model. In times of such modeling acceleration, the form and structure of these models matter greatly, as they directly shape how the models act on economic reality. This paper will therefore explore how economic forecasting models act and what kind of decision-making they produce, through a detailed analysis of one of the main French models for economic forecasting, operated by the French central bank. First, I will show that, both in their structure and in the descriptions of their results, such models perform a conception of the economy as a gendered body enduring shocks. Then, I will show that the economic policies these models provoke are centered on applying shocks to the economy in order to ‘wake her up’. Indeed, the models make inflicting economic shocks, i.e. abrupt policy change in one direction or the other, the only available political action. I will conclude by arguing that economic forecasting models act as pseudo-medical monitoring of the economy, producing repeated brutal policies as economic shocks.
Keywords: economic forecasting, models, expertise, gender, shocks
Paper short abstract
Based on ethnographic fieldwork with neuroscientists, cognitive scientists and roboticists, we develop the concept of ‘speculative modelling’ to rethink speculation as both central to technoscience and as an open practice, beyond the ‘closed’ nature of prediction.
Paper long abstract
In this paper, we develop the concept of ‘speculative modelling’ to understand encounters between practices of predictive modelling and the human modellers who tinker with them. While speculation typically denotes conjecture in a negative sense, we draw on feminist STS literature on speculative fiction to rethink speculation as both central to technoscience and as an open practice, beyond the ‘closed’ nature of prediction. Our contribution speaks to the panel’s concern with models as technologies of future-making by showing how speculative practices uneasily and unsteadily persist within, and potentially disrupt, the automation and ‘generativity’ of contemporary predictive systems, notwithstanding the push to erode such speculations.
Based on ethnographic fieldwork with neuroscientists, cognitive scientists and roboticists, we highlight how human modellers necessarily engage in speculative practices: aiming to build models that ‘surprise’ their makers through their lack of predictability; simultaneously collapsing and expanding the difference between model and modelled; and recognising the limits of models by acknowledging modellers’ role in the emergence of models themselves. While scientists and engineers often acknowledge such speculative practices, they present them as preliminary to their scientific process, or as glitches to be patched over through ‘better’, ‘more efficient’, ‘mathematically sturdy’ implementations.
Conversely, we focus on what speculation might mean for a critique of predictive and generative modelling. In framing speculation as a counterbalance to the ‘closed’ nature of prediction, we wish to re-open the multiple, alternative futures that lie in the space between models, their subjects, objects, makers, and interlocutors.
Paper short abstract
Digital twins such as Destination Earth turn climate modeling into future-making. By operationalizing what-if simulations and synthetic data, they collapse prediction and intervention, rendering planetary futures actionable in the present and reshaping how they are known and governed.
Paper long abstract
The emergence of digital twins challenges the premise that no computer can accurately predict the future. Digital twins are real-time, data-driven models of material systems that simulate their behavior under specific conditions. The data generated through these simulations is then used to modify their “material” twin, enforcing or preventing projected futures. In doing so, “a continuum between the physical and virtual worlds” (Crespi et al. 2023, 8) is created, transforming “physical objects into programmable entities” (Ibid.).
Initially developed in manufacturing, digital twins now scale to planetary dimensions. The most ambitious example is Destination Earth (DestinE), an EU-funded project developing digital twins for climate change adaptation. Here, “What-If Simulations” model hypothetical climate scenarios by varying parameters, generating synthetic datasets that inform interventions aimed at reshaping planetary processes.
I argue that digital twins collapse the distinction between prediction and intervention, enabling a mode of future-making in which futures are determined in the present. Designed to make projected futures actionable, they enact the futures they model. But how, through recursive entanglement with material processes, do they render planetary futures governable? What political and epistemological consequences follow once the management of planetary futures relies on synthetic data?
To address these questions, I analyze specific “What-If Simulations” conducted within DestinE’s Climate Change Adaptation Digital Twin during my research stay at ECMWF. Understanding digital twins as “operational images” (Parikka 2023), as visual phenomena which themselves act, I draw on “operational analysis” (Friedrich/Hoel 2023) to examine how future-making is enacted through technical operations in DestinE.
Paper short abstract
This work explores the political consequences of the increasing homogeneity of Large Language Models through the lens of socio-technical imaginaries. It highlights the active role these models play in future-making, as they collapse collective perceptions of the future onto hegemonic imaginaries.
Paper long abstract
As Large Language Models (LLMs) increasingly mediate global knowledge production, empirical research indicates a convergence toward a narrow, homogeneous subset of representations. This paper argues that this ‘generative monoculture’ is not a mere technical artefact, but a directional result of design choices. Irrespective of explicit intent, these choices project situated, hegemonic worldviews onto a global user base.
By framing this homogenization through the lens of socio-technical imaginaries, the paper analyses how LLMs act as vectors that direct collective attention and investments toward a constrained set of ‘futures worth building’ while discarding alternative perspectives as noise. Furthermore, I build on Gramscian theory to delineate the structural capacity of these systems to both unearth and stabilise specific imaginaries in current ‘wars of position’. Within this frame, the ‘lossy’ nature of language modelling is presented as a performative act of future-making that automates the fostering of spontaneous consent, reshaping the social order by positioning hegemonic imaginaries as the only ‘common-sense’ reality. Ultimately, LLMs can be understood not merely as tools of representation but as a medium that actively defines the future by narrowing the collective perception of what is possible.
Paper short abstract
Models are routinely used in policy making to assist next-to-real-time decision making. Investigating models deployed in pandemic preparedness, the paper argues that modelling needs to be understood within the shift towards all-hazard planning and calls for a more reflexive use of these tools.
Paper long abstract
Models are boundary objects (Pieri 2021) and act as ‘travelling facts’ (Mansnerus 2015) often deployed across multiple stakeholders and in multi-agency contexts. Increasingly, they are routinely used in policy making, especially in emergency contexts and in situations characterised by high uncertainty and the need for rapid response.
By exploring how models are deployed in pandemic preparedness, this paper critically appraises the implications of their use in next-to-real-time decision making. It reflects on a range of examples, from Zika infection containment at global mega events to attempts at modelling individual behaviour in an outbreak via online gaming. The examples illustrate different strengths and weaknesses of these practices, including black-boxed assumptions and the problematic silencing of data limitations once the models are ‘handed over’ to end-users.
In this paper I argue that we need to understand the increasing reliance on modelling against the backdrop of another key trend – the shift towards all-hazard planning. The move towards an ‘all-hazard approach’ to preparedness aims to achieve flexible strategies, as well as agility and transferability of plans from one crisis to another (Pieri 2021). Nonetheless, the latter can often result in vague and underspecified plans of action. In this context, models are increasingly sought as tools that can translate projections into actionable plans.
I argue for a more considered and critical approach to these tools, with a view to achieving better ways of balancing the tension between agility of preparedness plans and keeping in sight the complexity of high consequence events, like pandemics.
Paper short abstract
The epistemic authority of the EAT-Lancet Report derived from the complexity of its calculations. When the model acted, however (i.e., when it prescribed a one-size-fits-all diet), the complexity of the food system the report sought to represent offered numerous points of entry for contestation.
Paper long abstract
Data-intensive methods have become vital for getting epistemic purchase on global socio-ecological issues. Yet these methods are never fully insulated against contestation precisely because they seek to represent things that are so large and complex. This presentation will document a set of such contestations to reveal what they can tell us about the performativity and politics bound up in moments where models are made to act. It focuses on the EAT-Lancet report and its Planetary Diet. By promising to have comprehensively mapped the interactions between human and environmental health and food, the EAT-Lancet Commission attracted a slew of contestations. The presentation will explore why complexity and planetarity have become so alluring in science communication strategies, and it will interrogate the political economy that determines which representation–reality gaps get to become controversial. The contestations the EAT-Lancet report attracted arose, in part, from conflicting understandings of complexity. Where the report depicted the food system as a romantic whole, those who challenged its findings did so by attesting to the food system's baroque irreducibility and the pervasiveness of context. When models act, these differences - between the sophisticated though inherently finite calculations performed by a model and the complexity of life as a whole, and between the model's view-from-nowhere and people's views-from-somewhere - can become extremely volatile.