- Convenor: Tamara Gupper (Goethe University, Frankfurt am Main)
- Format: Panel
- Sessions: Tuesday 7 June, 2022
- Time zone: Europe/London
Short Abstract:
This panel addresses the role of the people involved in and responsible for the development of AI and robotics, as well as how they perceive these technologies.
Long Abstract:
The idea of creating objects that autonomously imitate – or even surpass – human capabilities is not new. For millennia, such ideas were the stuff of legend; in the last few decades, however, technological advances have made some of them materialise. The defeat of human world champions in highly mediatised ‘man vs. machine’ matches, such as the chess match between Garry Kasparov and Deep Blue in 1997 or the match in the Asian board game ‘Go’ between Lee Sedol and AlphaGo in 2016, is probably among the more spectacular examples. Systems that include AI components have, however, also found their way into many people’s homes and daily routines, including product recommendation systems, automatic translation software, and robot vacuum cleaners.
So far, most social science research on AI and robotics has focussed either on fictional representations of these technologies or on users’ perspectives. But what about the people behind these technologies? Who are they, and how do they perceive their work, the technologies they (help) develop, and the impact these technologies might have on society at large? What processes underlie the development of AI and robotics, and what social and material contexts are at play? This panel invites contributions from a wide range of disciplines that address the people involved in and responsible for the development of AI and robotics, as well as their perspectives on these technologies.
Accepted papers:
Session 1: Tuesday 7 June, 2022

Paper short abstract:
The “Wizard of Oz technique” used in Natural Language Processing research aims to anticipate technological innovation in AI by simulating a program with a human substitute. Analysing the epistemic specificities of this protocol sheds new light on ritualisation and non-human interaction in science.
Paper long abstract:
Research in Artificial Intelligence rests on epistemic specificities which the great debates accompanying its development since the 1950s have expressed without, however, circumscribing them with precision. By focusing on scientific practices themselves and, in particular, on a form of experiment specific to the field of natural language processing, this paper aims to demonstrate how the tension created by bringing the machine closer to the human being has produced a form of ritualization usually incompatible with science. The “Wizard of Oz technique” is based on the interaction of a human with a simulated machine. Its purpose is to anticipate the progress of computer science and its consequences. Tracing through scientific publications the uses of this protocol since its appearance in AI labs in the 1970s, I analyse how, by playing the role of the program, the researcher materializes an unrealized innovation that refers to a non-human - the AI - whose future existence is projected and personified.
This research received the support of the Human at home project (Université de Montpellier, FEDER, Occitanie)
Paper short abstract:
This paper aims to provoke discussion about losing sight of what is corporeally unique to ourselves through the persistent appropriation of the senses by robotics and AI. By building “bodies” that can “sense” things, this field seems to claim embodied intelligence as its own. Contrasts will be made with dance.
Paper long abstract:
This paper is intended to provoke discussion about how we may be in the process of losing sight of what is corporeally unique to ourselves, or the fading away of the corporeal in our human imaginary. The proposal is that this is occurring through the persistent appropriation of the senses by the field of robotics and AI. Arguably the origins of this can be traced to the work of the roboticist Rodney Brooks, whose seminal papers, including A Robot that Walks (1989) and Elephants Don’t Play Chess (1990), proposed building “robot control systems linking perception to action” and argued for an alternative approach to AI grounded in “physical reality”. In other words, providing robots with the technical means to react to their environment, giving them a “body” that could “sense” things and improving their functioning through this feedback. These ideas have gained dominance in robotics and AI and are now designated as the field of Embodied Intelligence (Cangelosi et al. 2015). In critiquing the implications of this development, I will refer to the work of the anthropologist Lucy Suchman who, in her book Human-Machine Reconfigurations (2007), calls for a human imaginary that can tie humans and non-humans together without erasing the differences between them. I will draw on my own field studies in dance, including the work of the artist Lisa Nelson, which reflects how sensory skills are achieved through dedicated forms of dance practice, to suggest that the body is actually missing in the field of Embodied Intelligence.
Paper short abstract:
A dialogue between a roboticist and a robot-ethicist who are in search of the in-between space of human and machine, humanities and engineering, the impact of technology and the work that makes it a reality.
Paper long abstract:
Who are we? We are a collection of students and enthusiasts, novices and those making mistakes for a living, guided by experts who have way too many other things to do. We are engineers and learn by doing, getting it wrong and then trying again – until one of our getting-it-wrongs is deemed ‘good enough’. Then we graduate, or the product is moved forward, or we are moved on to something – somewhere – else.
We feel the pressure society puts on us to make science fiction: no robot we make will satisfy public expectation, while each robot we create pushes us further into the future.
Who are the ‘rest of us’? Ethicists, knowledge-seekers, deeply invested beyond the skin and skull of human persons. We’re in search of connections, interrelations between the machine and the person, of the possibilities of the future that we were raised to imagine. We try to strike a balance between theory and practice in a way that those behind the technologies can understand while performing disciplinarity, entangled in an ever more complicated web of duty, care, and Promethean kinship.
This paper is a dialogue between a roboticist and a robot-ethicist who are in search of the in-between space of human and machine, humanities and engineering, the impact of technology and the work that makes it a reality. Utilizing the Socratic dialogue format, we explore the uncertain future of what it means to be a ‘good’ roboticist in post-human times.
Paper short abstract:
Developers on GitHub, the world’s largest open source software (OSS) platform, have a big influence on the algorithmic present and on what the "coded" future will look like. A closer look at their beliefs and futuristic imaginaries may therefore tell us a lot about emerging futures in the making.
Paper long abstract:
GitHub is the largest open source software (OSS) platform and probably contains all code that was ever written. Simultaneously, it is the new “Maschinenraum” (engine room) of algorithm factories (Daum 2020). On GitHub, new AI systems are developed, tested and distributed – thousands a day, by people from all over the world. Whatever “algorithmic future” there will be, it will probably be “forked” (Daum 2020) and produced on GitHub.
Following Appadurai’s definition of the future “as a cultural fact” (2013), I look at the future narratives and imaginations of developers and contributors on GitHub: which futures are desired? Which images and visions of the world and of the human being are narratively constructed? As I interview people from China, Taiwan and Germany, as well as from the US, a specific intercultural question also arises: is there something like a comparable “global future” that they all imagine?
A first analysis shows that two strong “socio-technological imaginaries” (Jasanoff 2016) can be identified: the "greater good" and a codependent "Manichean good vs. bad" imaginary. These imaginaries are “populated” (Land 1998) by a strong techno-optimism and (neo)liberal beliefs in constant development towards a better future. In combination with a hierarchical order on GitHub, this leads to a coding environment in which individual developers emerge as ‘Benevolent Dictators for Life’ (BDFL). This specific GitHub environment fosters and relies on a greater narrative, which I trace back to Appadurai’s (2013) concept of "trajectorism", the great meta-trap of "the West".