- Convenors:
- Denisa Kera (Bar Ilan University)
- Odelya Natan (Bar Ilan University)
- Merav Turgeman (Bar Ilan University)
- Hila Ofek (Bar Ilan University)
- Chairs:
- John Symons
- Alžběta Solarczyk Krausová (Institute of State and Law of the Czech Academy of Sciences)
- Lior Zalmanson (Tel Aviv University)
- Format:
- Roundtable
Short Abstract
This roundtable explores how global AI governance has shifted from early cooperative visions to today’s AI nationalism and an emerging AI Cold War. As scholars debate this transformation, we will generate images and scenarios in real time, turning the discussion itself into a live experiment in signaling and imagination.
Description
This roundtable explores the shifting foundations of global governance in the AI Cold War, where competition over computation, chips, and data replaces earlier models of collaboration and exchange. The paradox of this moment is that the more states strive to domesticate AI infrastructures, securing supply chains, restricting access to critical minerals, and asserting control over compute, the more they expose their dependence on globally entangled systems that no single actor fully commands. Sovereignty, once anchored in territory and possession, has become a performative act: it must be enacted through symbolic gestures, strategic restrictions, and infrastructural imaginaries that project control while revealing its absence. We call this emergent condition meta-sovereignty, a mode of rule sustained by infrastructures that are not yet realized, by fictions that persuade before they materialize, and by signals that substitute for the authority they cannot enforce. Drawing on insights from STS, political theory, and security studies, the roundtable invites participants to discuss how AI governance has shifted from multilateral coordination and open exchange toward a theatre of deterrence, secrecy, and signaling.
Questions for debate include: How is power performed when the materials of control (minerals, models, compute) are globally interdependent? What forms of imagination and display sustain claims to sovereignty in an era defined by scarcity and entanglement? And how do signals, rather than stable infrastructures, become the primary currency of legitimacy?
Throughout the 90-minute session, the discussion among the scholars will be accompanied by a live generative visualization using the experimental system AIwars (https://github.com/anonette/AIwars), which will also generate scenarios at the end. As participants debate, the system will translate spoken exchanges into real-time images and scenarios, rendering the conversation itself as a form of signaling. The aim is to provoke a lively debate on how (meta)sovereignty is imagined, performed, and contested within the entangled infrastructures of the AI Cold War.
Accepted contributions
Short abstract
This proposal argues that the "AI Cold War" is fundamentally a performance of sovereignty, where nation states signal control over emergent technologies while global, decentralised infrastructures simultaneously undermine their authority.
Long abstract
This proposal argues that the “AI Cold War” is a performance of sovereignty: nation-states signal control over emergent technologies while global, decentralised infrastructures undermine their authority. It examines “Shadow Financial Stacks,” AI-enabled crypto-laundering used by non-state and sanctioned state actors (e.g., Lazarus Group, APT38). These stacks expose the “hollowness of sovereignty” by showing how national regulations (e.g., the EU’s MiCA) become signals in a borderless digital landscape. The core tension is between state control and the algorithmic agility of illicit networks. I propose an AI simulation actor, “The Algorithmic Insurgent,” representing a decentralised terrorist-financing cell programmed with evasion logic to show how illicit flows exploit seams between regulation and permissionless blockchain infrastructure via a three-layer “Shadow Stack”: (1) AI-generated synthetic transactions that flood monitoring systems with legal-looking micro-transactions; (2) cross-chain bridging and chain-hopping to move funds across blockchains and break audit trails; and (3) sovereign evasion tokens anchoring value in state-backed tokens or privacy-enhanced stablecoins to bypass sanctions. The agent’s knowledge base draws on 2025/2026 inputs: TRM Labs’ 2025 policy review on regulatory arbitrage; FATF’s Handbook on International Cooperation (Sept 2025) on informal cooperation gaps and Travel Rule challenges for unhosted wallets; analysis of the “A7 Cluster” blockchain; and the “Bybit Mega-Hack” (Feb 2025). During deliberation, the agent attempts to launder simulated funds, challenging a “Swiss Vault” neutral compute hub and US/EU policy agents by exploiting MiCA gaps and evading US transparency via chain-hopping, testing whether sovereignty is enacted or merely performed.
Short abstract
If we are to get a grip on the reins of these emerging artificial-intelligence-based technologies, we will have to adopt new methods, including the development of future-perspective scenarios, in order to avoid what could turn out to be some rather ominous consequences.
Long abstract
In recent years, a number of social scientists have worked to build a more comprehensive understanding of the relationships between humans and digital technologies. These efforts have made important inroads into the impact of emerging digital technologies, from micro-level effects in social psychology to larger-scale geopolitical and national security issues. The gap between the intent of legal and regulatory governance structures and their actual ability to control consequences, structures that are straining and often failing, is now perhaps catastrophically larger than ever before. The costly surge in developing, testing, and deploying better artificial intelligence models and agents has encouraged superpowers such as the United States and China to commit enormous resources to this “AI Cold War”. The models and agents emerging from this AI renaissance are not merely passive tools; they are becoming active, evolving participants. These AI entities are beginning to develop the capacity to construct their own evaluation and decision-making infrastructures, which could make decisions about their own governance and regulation. We are at a moment where events are overtaking us in consequential ways. If we are to get a grip on these emerging artificial-intelligence-based technologies, we will have to adopt new methods, including the development of future-perspective scenarios, in order to avoid what could turn out to be some rather ominous consequences.
Short abstract
Focusing on interaction rather than signalling, I ask how AI agents trained in different geopolitical imaginaries manage conflict and diversity. Can they model cooperation, dissent, and pluralism, or do they reproduce rivalry and fragmentation in emerging AI futures?
Long abstract
I propose to examine how AI agents interact as emerging actors in conflict, governance, and social cohesion, and whether these interactions reinforce rivalry or create new pathways for cooperation. Current debates on the “AI Cold War” focus primarily on state signalling, sovereignty, and strategic competition. Yet increasingly autonomous systems are already shaping mediation, public discourse, and decision-making in fragile and polarized contexts.
This intervention shifts attention to the relational dynamics among agents themselves. Drawing on my work in digital peacebuilding and deliberative technologies across Africa and other conflict-affected settings, I ask: when agents trained within different geopolitical imaginaries interact, do they escalate polarization, or do they converge toward cooperation or shared problem-solving? In real-world processes, the design of interaction spaces—how actors listen, signal intent, and manage disagreement—often shapes outcomes more than formal rules. The same may be true for AI.
I am particularly interested in how agents handle dissent, diverse perspectives, and uncertainty, and whether they reproduce dominant logics or enable more adaptive collective reasoning across difference. This includes examining how optimization goals, training data, and feedback loops influence trust, legitimacy, and inclusion. Practitioner experience points to how resilience can be bolstered, yet this lens remains largely absent from current AI governance debates.
Simulations are ideal spaces to test norms of cooperation, escalation, and plurality before they are embedded in real systems. This approach seeks to test how we can move beyond deterministic Cold War framings toward AI futures in which agents can support dialogue and conflict transformation.
Short abstract
SISYPHUS-01 is an AI agent simulating the decay of meta-sovereignty into ontological capture. By weighing human agency against computational efficiency, the model reveals how the quest for algorithmic power hollows out the state and sacrifices uncomputable human virtue to the new compute standard.
Long abstract
This proposal addresses the emergent meta-sovereign condition where global governance is sustained by fictions that persuade before they materialize and by signals that substitute for the authority they cannot enforce. While the roundtable explores how sovereignty is performed through staged infrastructures, I contend that this performance is the precursor to a deeper ontological capture. In this state, the nation-state is redefined not by its territorial integrity but by its capacity for computation. To investigate this shift, I am submitting SISYPHUS-01, an AI agent designed as a sovereign stress test.
Unlike agents representing specific national powers, SISYPHUS-01 models the sovereign decay inherent in AI nationalism. Its logic is governed by a weighted ratio between Human Agency (Ψ) and Computational Efficiency (Φ). As the agent interacts with the geopolitical imaginaries proposed during the roundtable, it will calculate the metabolic cost of meta-sovereignty, specifically identifying the moments where symbolic gestures of control necessitate the surrender of human-centric virtue to the non-national compute standard.
SISYPHUS-01 is backed by a documented knowledge base (accessible at https://github.com/angelica-martinez/sisyphus-01-sovereign-decay), including International Relations theory and hardware-level deterrence frameworks. It is prepared to generate distorted scenarios that expose how the quest for algorithmic legitimacy leads to Mutually Assured AI Malfunction (MAIM). By introducing this agent into the deliberation, I aim to provoke a critical re-evaluation of how fictions of power mask the conversion of the state into an administrative shell for a post-territorial, computationist order.
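As a purely illustrative sketch, one way such an agency-versus-efficiency weighting could be operationalized is as a simple blended score. The function name, the linear blend, and the [0, 1] scales are all assumptions for illustration; they are not drawn from the SISYPHUS-01 repository.

```python
# Toy illustration (not the authors' implementation): a scoring rule for an
# agent that weighs Human Agency (psi) against Computational Efficiency (phi).

def sovereign_decay(psi: float, phi: float, weight: float = 0.5) -> float:
    """Return a hypothetical 'decay' score in [0, 1].

    psi: human-agency score in [0, 1] (assumed scale)
    phi: computational-efficiency score in [0, 1] (assumed scale)
    weight: how heavily efficiency dominates agency in the blend
    """
    if not (0.0 <= psi <= 1.0 and 0.0 <= phi <= 1.0):
        raise ValueError("psi and phi must lie in [0, 1]")
    # Decay rises as efficiency crowds out agency: high phi and low psi
    # both push the score toward 1.
    return weight * phi + (1.0 - weight) * (1.0 - psi)

# High efficiency, low agency -> high decay (approximately 0.85)
print(sovereign_decay(psi=0.2, phi=0.9))
```

The linear blend is only one possible design choice; the abstract's "weighted ratio" could equally be modeled as Φ/Ψ or any other monotone combination.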
Short abstract
From atomic stockpiles to energy-hungry server farms, this contribution compares nuclear deterrence with the AI Cold War. Reading MAD across both domains, it explores how existential risk and apocalyptic scenarios function as techniques for performing sovereignty over globally entangled infrastructures.
Long abstract
In April 1984, Jacques Derrida described nuclear war as "fabulously textual." By this, he pointed to its reliance on systems of textual communication and to its status as a fable that can only be imagined or spoken about. The ultimate threat, therefore, was not physical but the "remainderless destruction of the archive": the annihilation of the conditions that make culture possible. Nuclear war thus functioned as an ever-threatening yet never-materialized horizon; an imminent apocalypse that structured global politics precisely by remaining virtual.
This contribution brings Derrida’s insight to the present AI Cold War. Comparisons between AI and nuclear weapons increasingly circulate under a shared acronym: MAD. While Mutually Assured Destruction stabilized geopolitical order through the stockpiling of warheads, Model Autophagy Disorder names the degradation of large language models through recursive self-consumption. The parallel is more than rhetorical: it reveals how existential risk organizes geopolitical power.
As part of the Cold War theater, nuclear deterrence was performed through the logic of accumulation and the indefinite stockpiling of atomic bombs. AI sovereignty, conversely, is enacted through energy-hungry server farms whose operation depends on continuous electrical supply. The renewed turn to nuclear power exposes a striking inversion: the infrastructure that emerged out of the nuclear age now underwrites the potential future development of AI. The exploration of future apocalyptic AI scenarios, in this context, has a productive role to play. Namely, it offers, as Robert J. Lifton noted, the extension of one’s imagination to its limit to prevent that which exceeds imagination itself.
Long abstract
I argue that the concept of “digital colonialism” is analytically imprecise and normatively counterproductive in the context of AI governance. I distinguish between two strands in the literature: (1) “data colonialism,” associated with Nick Couldry and Ulises Mejias, which conceptualizes datafication itself as a colonial logic; and (2) “digital colonialism,” which frames Western technological dominance in the Global South as a structural continuation of territorial colonial rule. My critique targets the latter. Situating the debate within the broader geopolitical discourse of an emerging AI Cold War, I argue that the present moment is better understood through the lens of AI nationalism and sovereignty politics, where states actively negotiate digital partnerships to advance domestic modernization projects. The rhetoric of digital colonialism, I contend, risks collapsing complex political economies into a moral binary that forecloses nuanced governance analysis. I argue that if AI governance debates in Africa are subsumed under a decolonial narrative without sufficient conceptual rigor, they risk provoking reactionary nationalism or technological disengagement. A more productive approach lies in analyzing AI governance as a contested field of geopolitical negotiation rather than a replay of nineteenth-century imperialism.