Accepted Paper
Paper short abstract
This work explores the political consequences of the increasing homogeneity of Large Language Models through the lens of socio-technical imaginaries. It highlights the active role these models play in future-making, as they collapse collective perceptions of the future onto hegemonic imaginaries.
Paper long abstract
As Large Language Models (LLMs) increasingly mediate global knowledge production, empirical research indicates a convergence toward a narrow, homogeneous subset of representations. This paper argues that this ‘generative monoculture’ is not a mere technical artefact but the directional outcome of design choices. Irrespective of explicit intent, these choices project situated, hegemonic worldviews onto a global user base.
By framing this homogenization through the lens of socio-technical imaginaries, the paper analyses how LLMs act as vectors that direct collective attention and investment toward a constrained set of ‘futures worth building’ while discarding alternative perspectives as noise. Furthermore, I build on Gramscian theory to delineate the structural capacity of these systems to both unearth and stabilise specific imaginaries in current ‘wars of position’. Within this frame, the ‘lossy’ nature of language modelling is presented as a performative act of future-making that automates the fostering of spontaneous consent, reshaping the social order by positioning hegemonic imaginaries as the only ‘common-sense’ reality. Ultimately, LLMs can be understood not merely as tools of representation but as a medium that actively defines the future by narrowing the collective perception of what is possible.
When models act: Forecasting, automation and the politics of future-making
Session 2