- Convenor: Jack Stilgoe (University College London)
- Stream: Discovery, discussion and decision
- Location: Bowland North Seminar Room 7
- Start time: 25 July, 2018 (time zone: Europe/London)
- Session slots: 1
Short Abstract:
Machine learning is advancing rapidly, accompanied by grand promises of hype and doom. The everyday applications of machine learning are already to be found in our smartphones and our homes and, soon, in self-driving cars. But who is doing the learning?
Long Abstract:
Machine learning is advancing rapidly, accompanied by grand promises of hype and doom. Self-driving cars have become a test case for the efficacy of machine learning. But this quintessentially 'smart' technology is not born smart. The algorithms that control their movements are learning as the technology emerges. Self-driving cars represent a high-stakes test of the powers of machine learning, as well as a test case for social learning in technology governance. Society is learning about the technology while the technology learns about society. Understanding and governing the politics of this technology means asking 'Who is learning, what are they learning and how are they learning?' Trajectories and rhetorics of machine learning in transport pose a substantial governance challenge. 'Self-driving' or 'autonomous' cars are misnamed. As with other technologies, they are shaped by assumptions about social needs, solvable problems, and economic opportunities. Governing these technologies in the public interest means improving social learning by constructively engaging with the contingencies of machine learning.
The popular debate about machine learning focuses on what is being learnt. However, the politics of these technologies are likely to revolve around alternative questions: who is learning and how? STS has the potential to inject social learning into what is currently a narrow debate about machine learning.
Accepted papers:
Session 1
Paper short abstract:
Machine learning algorithms are claimed to learn from raw data. Drawing on a field observation in a robotics lab, I show how algorithms are instead made to learn through tinkering, and ask what consequences this has for their application.
Paper long abstract:
Drawing on ethnographic field studies in robotics labs, I address the panel's question of how machines are (made to) learn. Following the implementation of a machine learning algorithm in the software of a robot being prepared for a robotics competition, my presentation shows the underlying effort required of the robotics scientists for the algorithm to function. The tinkering involved, with both the learning input and the algorithm itself, partially deconstructs the myth that surrounds the hype of machine learning.
From there, I illustrate how specific forms of knowledge (here, the expertise of robotics engineers) are inscribed into the algorithms of robots, and what consequences such a biased algorithm has for contexts of robotic application such as security, surveillance or elderly care.
Paper short abstract:
Reframing machine learning in terms of responsible innovation allows us to focus on who is doing the learning and how.
Paper long abstract:
Machine learning is advancing rapidly, accompanied by grand promises of hype and doom. Self-driving cars have become a test case for the efficacy of machine learning. But this quintessentially 'smart' technology is not born smart. The algorithms that control their movements are learning as the technology emerges. Self-driving cars represent a high-stakes test of the powers of machine learning, as well as a test case for social learning in technology governance. Society is learning about the technology while the technology learns about society. Understanding and governing the politics of this technology means asking 'Who is learning, what are they learning and how are they learning?' Governing these technologies in the public interest means improving social learning by constructively engaging with the contingencies of machine learning. The popular debate about machine learning focuses on what is being learnt. STS has the potential to inject social learning into what is currently a narrow debate about machine learning.