- Convenors: Erik Reichborn-Kjennerud (Norwegian Institute of International Affairs (NUPI)) and Lilly Muller (Cornell University)
- Chair: Jutta Weber (University Paderborn)
- Discussant: Jutta Weber (University Paderborn)
- Format: Traditional Open Panel
- Location: NU-4A45
- Sessions: Wednesday 17 July, -
Time zone: Europe/Amsterdam
Short Abstract:
How can we make sense of contemporary epistemological transformations in relation to so-called artificial intelligence, which currently underpins the imaginaries and practices of security and military assemblages, the worlds they produce, the modes of power they allow for, and the violence they engender?
Long Abstract:
How can we make sense of the contemporary mobilization of so-called artificial intelligence into security and warfighting assemblages? This panel seeks to move the conversation on AI in military and security practices beyond the ‘what ifs’ of killer robots, the ethics of autonomy, and the fallibility of sociotechnical systems towards how militaries and the larger security apparatus are making sense of and giving meaning to the operational environment through algorithmic processing of massive quantities of data. At stake here is not only the automation of warfare or security practices, but a world that is increasingly rearranged and designed such that the only rational way of thinking about politics and security is through military supremacy, violence, continuous operations, and global domination. Interested in the “closed world” of martial epistemologies, we are looking for critical interventions that examine the imaginaries, operative logics, and sociotechnical practices of “machineries of knowledge production”. Shifting focus from killing to knowing, and from perception to intelligibility, we call for open, innovative, and experimental investigations that provide novel insights into the historical constitution, present operation, and future ramifications of the epistemological transformations underpinning martial practices, the worlds they produce, the modes of power they allow for, and the conflict and violence they engender. While we welcome broad empirical inquiries into AI and the question of martial practices, contributions may traverse, but are by no means limited to, the following questions:
How can we rethink critique, practice, and knowledge production amidst generative AI? How can we challenge the technopolitics of martial imaginaries and the militarism they engender? How should we understand and analytically engage with the design and engineering of particular epistemic configurations? How do novel machine learning systems transform epistemic operations, and to what effects?
Accepted papers:
Session 1: Wednesday 17 July, 2024, -
Short abstract:
This paper blends STS methods and expert interviews to challenge the notion that algorithmic technologies “increase certainty” in military decision-making. I unpack how certainty is co-constructed through pattern recognition, accuracy rates, verification procedures, and human judgment.
Long abstract:
Industry asserts that novel algorithmic applications can lift the fog of war and enhance certainty in military decision-making. Driven by the logic of risk reduction, militaries seek algorithms as solutions to mitigate the uncertainties of warfare by organizing, categorizing, correlating, and making contemporary wars knowable in novel ways.
However, the use of algorithms to allegedly minimize the uncertainties of warfare introduces a new dimension of uncertainty into this sociotechnical assemblage. This dimension is related to the incomprehensibility of algorithmic processing – the so-called “black box” problem – and the evolving nature of these systems in ways that make it challenging to predict on what basis and how recommendations are made. To reinsert certainty into those systems, new procedures of verification and validation are developed to quantify the accuracy and reliability rates of algorithms.
Using STS-inspired methods, I study ‘uncertainty’ as a product that emerges through, in, and out of the combined agencies of algorithmic processing, interface design, military professionals, military institutional logic and procedures, technologists, and legal frameworks. I blend desk research with expert interviews to grasp the practical implications of using software applications in the military context. With this methodology and focus, my piece contributes to an expanding body of literature in critical AI and international law studies that challenges the objectivity of algorithmic tools and examines with curiosity how new techniques and procedures provide novel non-legal ways of legitimizing their use in warfare.
Short abstract:
This paper critically examines the UK government's use of the imaginary of AI in its effort to position itself in what it sees as a new domain, shedding light on the epistemological transformations in UK military and security practices that follow.
Long abstract:
The UK government sees itself as a global superpower in so-called Artificial Intelligence (AI). This paper critically examines the UK government's use of the imaginary of AI in its effort to position itself in what it sees as a new domain. To do so, the paper examines which imaginaries underpin the different parts of the UK government's security and military assemblages, what worlds they produce, and the modes of power they allow for. Drawing on public strategies, working manuals, handbooks, and guides, as well as interviews with officials and consultants who worked on the cybersecurity AI documents, the paper traces the emergence, contestation, and stabilization of the imaginary of AI in UK government cybersecurity and military practice. With the goal of making sense of the contemporary mobilisation of AI into UK military and security strategies, this paper seeks to contribute to the move beyond the ‘what ifs’ of offensive/defensive debates, the ethics of autonomous AI, and the fallibility of sociotechnical systems towards how the military and the larger security apparatus are making sense of and giving meaning to the operational environment. By critically examining the imaginaries of AI presented by different government and military entities, and the contestations in their making, the paper produces novel insights into the historical constitution, present operation, and future ramifications of the epistemological transformations that underpin UK cybersecurity practices, the worlds they produce, the modes of power they allow for, and the conflict and violence they engender.
Short abstract:
This paper explores how we can understand and engage critically with contemporary initiatives in the incorporation of Large Language Models (LLMs) into military warfighting assemblages.
Long abstract:
This paper asks how we might make sense of contemporary initiatives to incorporate Large Language Models (LLMs) into warfighting assemblages. In promotional scenarios, LLMs are envisioned to automate, and accelerate, the generation of so-called Courses of Action (COAs), or plans for operational command. Empirically, the paper is based on a reading of Palantir’s battlefield management system, the Artificial Intelligence Platform for Defence (AIP), Scale’s “AI Digital Staff Officer” Donovan, and DARPA’s novel warfighting concept Mosaic Warfare. Through an analysis of US military doctrines, we situate these systems in a long-standing Western, and in particular American, military imaginary that has placed ‘the battle’ at the center of how war is understood and practiced. By showing how these imaginaries are inscribed in Palantir’s, Scale’s, and DARPA’s visions of LLM-enabled warfighting, we trace how the automated generation of ‘objective’ knowledge about the enemy is fundamental to the martial dream of immersing analysts and operators in data worlds. Mapping the ‘end-to-end’ generation of COAs, from data production and analysis to decision-making, we argue, raises critical questions regarding the automation of military (operational) logics. Within these automated processes of trial and error is a highly distributed and messy agency, which paradoxically reproduces the logics of militarism at the same time that warfighting is made faster, deadlier, and less controllable.
Short abstract:
Rather than focusing on the phenomenon of AI itself, as though it were a clearly bounded or self-contained object, this paper will make the case for approaching such technological systems at their ‘interfaces.’
Long abstract:
Rather than focusing on the phenomenon of AI itself, as though it were a clearly bounded or self-contained object, this paper will make the case for approaching such technological systems at their ‘interfaces.’ Drawing on lessons and insights from cryptography practices during WW2, and comparing them with contemporary military imaginaries, operative logics, and sociotechnical practices of networked and data systems as “machineries of knowledge production,” this paper will draw out a novel theory of interfaces as intermedia that provides both conceptual and methodological insights for security scholars studying the interstices of military technologies (here in the form of signals, datasets, and the ether) and security. While ‘interfaces’ may suggest self-evident boundaries, such as between humans and technology, or contact zones between different mediums, by drawing on STS this paper will instead show that “rather than looking for boundaries of things, instead we must look for things of boundaries” (Abbott, 1995; Gieryn, 1999). By rethinking distinctions between old and new media, interfaces can thus work as both site and method in ways that help analysts trace how these machineries have been constituted historically, and thus challenge presentist assertions about the radical novelty of contemporary technological ‘revolutions.’