Accepted Paper
Paper short abstract
This paper provides an ethnographic account of how Artificial Intelligence (AI) is corroding the ethical landscape of militarism, taking Israeli intelligence units' embrace of AI-assisted targeting systems during the Israeli military's two-year bombardment of Gaza as a case study.
Paper long abstract
This paper provides an ethnographic account of Artificial Intelligence's (AI) impact on the ethical landscape of warfare, taking Israeli intelligence units' embrace of AI-assisted surveillance and decision-support systems during the Israeli military's two-year bombardment of Gaza as a case study. My research draws from interviews with veterans of Israeli intelligence units and is in conversation with foundational social theorists who offered early warnings about integrating AI into warfighting, from Hannah Arendt to Norbert Wiener. As such, I chronicle how the military's use of big data and machine learning systems to surveil, target, and kill corrodes soldiers' capacities for ethical decision-making and moral reasoning in wartime. My research shows how the stack of algorithmic systems mediating military operations distances soldiers from the immediate violence of warfare. Military personnel tasked with coordinating airstrikes access the battlefield through keyword search queries and algorithmically translated transcripts. In turn, killing unfolds ever more rapidly and at an ever larger scale, resulting in skyrocketing civilian casualties. As Western militaries invest record amounts of capital and labor to develop and deploy similar systems, my writing situates Israel as a critical case study for apprehending the de-skilling effects of military AI systems.
Anthropology of Artificial Intelligence and Oppression
Session 1