
Accepted Contribution:

Global risks and the organised irresponsibility of artificial intelligence  
Jack Stilgoe (University College London)

Short abstract:

As the promises of artificial intelligence attract growing social, political and financial attention, risks and responsibilities are being imagined in ways that serve the interests of a technoscientific elite.

Long abstract:

As the promises of artificial intelligence attract growing social, political and financial attention, risks and responsibilities are being imagined in ways that serve the interests of a technoscientific elite. In the UK and elsewhere, organisations are starting to institutionalise a mode of governance that presumes to know and take care of public concerns. And new research communities are forming around questions of AI ‘safety’ and ‘alignment’. These particular (and, in my view, problematic) modes of responsibility are attached to a view of the technology and its teleology that is already overdue for an STS demolition. In my contribution, I will draw on research into public and expert attitudes, conducted during 2024 via surveys and as part of a BBC documentary on AI and existential risk, and reflect on my role as a proponent, analyst and actor in debates about ‘Responsible AI’.

Combined Format Open Panel P115
Global socio-technical imaginaries of AI
  Session 3 Tuesday 16 July, 2024