
Accepted Contribution:

Technologies of the planetary future: superintelligence, safety and world order in the United States and Britain  
Apolline Taillandier (University of Cambridge)


Short abstract:

AI safety expertise performs visions of future AGI that centre the planetary rather than human life. This paper examines how such expertise entails visions of global politics, both remaking and strengthening dominant visions of world order.

Long abstract:

Advocates of AI safety claim that future AI capabilities may pose an existential risk to humanity, and that scientists, governments and technology companies should ally in paving the way to a desirable future with artificial general intelligence (AGI). This paper examines how technology companies, AI researchers, philosophers and philanthropists have contributed to the making of superintelligent AI futures since the early 2010s through a combination of public science, technology scenarios, risk-assessment tools, moral theories, machine learning models and highly abstract mathematical formalisations of AGI. It shows how contestations around, and stabilisations of, superintelligence scenarios contribute to recasting transhumanist, science fiction and computer science imaginaries of machine intelligence explosion and singularity. Specifically, the paper studies how AGI futures contribute to the recasting of world order visions, as evidenced in ideas of a ‘global superintelligent Leviathan’ (Bostrom 2014) and a planetary ‘stack’ (Bratton 2015). Departing from common understandings of AGI as an anthropocentric project, I argue that non-human life (on earth or on other planets), resource depletion, and the long-term habitability of the cosmos in fact constitute central themes in AGI and superintelligence discourse. Tracing these themes within ethical debates about existential risk and extinction from the 1960s onwards, I contrast current visions of AI catastrophe with prospective scenarios of epochal transformation such as nuclear winters and extreme Malthusian conditions. I show how AI safety expertise centres the planetary as the relevant scale of AI making and of political intervention requiring unprecedented coordination and integration, and how it thereby contributes to both destabilising and consolidating existing global hierarchies.

Combined Format Open Panel P115
Global socio-technical imaginaries of AI
  Session 2, Tuesday 16 July 2024