- Convenor: Tamara Gupper (Goethe University, Frankfurt am Main)
- Format: Panel
- Sessions: Monday 6 June, -
Time zone: Europe/London
Short Abstract:
This panel addresses the role of the people involved in and responsible for the development of AI and robotics, as well as their perceptions of these technologies.
Long Abstract:
The idea of creating objects that autonomously imitate – or even surpass – human capabilities is not new. While for millennia such ideas were the stuff of legend, the last few decades have seen technological advances that have made some of them materialise. The defeat of human world champions in highly mediatised ‘man vs. machine’ matches, such as the chess match between Garry Kasparov and Deep Blue in 1997, or the match in the Asian board game ‘Go’ between Lee Sedol and AlphaGo in 2016, are probably among the more spectacular examples. Systems that include AI components have, however, also found their way into many people’s homes and daily routines, including product recommendation systems, automatic translation software, and robot vacuum cleaners.
So far, most social science research on AI and robotics has focussed either on fictional representations of these technologies or on users’ perspectives on them. But what about the people behind these technologies? Who are they, and how do they perceive their work, the technologies they (help) develop, and the impact these technologies might have on wider society? What processes underlie the development of AI and robotics, and what social and material contexts are at play? This panel invites contributions from a wide range of disciplines that address the people involved in and responsible for the development of AI and robotics, as well as their perspectives on these technologies.
Accepted papers:
Session 1: Monday 6 June, 2022, -
Paper short abstract:
The present research examines the knowledge production processes of start-up founders, entrepreneurs, and venture capitalists in their work of creating and investing in artificial intelligence technologies in Taipei, Taiwan.
Paper long abstract:
Based upon ethnographic research in Taipei, Taiwan, the present research examines the way in which start-up founders, entrepreneurs, and venture capitalists understand themselves in their work of creating and investing in artificial intelligence technologies by focusing upon knowledge production.
Taiwan’s history as a global hardware manufacturer forms the background for collaborations between start-up companies, accelerators, venture capital firms, big technology firms, and the state in this period of accelerated growth of Taiwan’s high-tech industries. The research explores the organisational principles of AI start-up companies, accelerators, incubators, and ecosystem builders, aiming to apprehend the logics present in this ecosystem.
This exploration of the thinking around technological devices is intended to disrupt the assertion that technology exists external to human subjects and social relations. In contrast, the research focuses upon the logics engendered by the collaboration of human activity in the production of technological devices, emphasising the role of knowledge production in spaces of innovation in artificial intelligence.
Paper short abstract:
The paper analyzes the epistemic culture of the field of social robotics. At the core of this interest are three groups of "epistemic practices" of engineers, which alternate in their different abilities to deal with social complexity: Laboratization, Staging, and Proto-Ethnography.
Paper long abstract:
Through its intended use in everyday life-worlds, social robotics becomes a discipline like architecture or product design, in which scientific, technical, political, social, and aesthetic expertise intersect. Engineers who make social worlds the subject of their work inevitably become proto-social researchers themselves.
How, and by which means, do researchers in social robotics deal with this? To investigate this, I examined the activities of social robotics in an ethnographic study (Bischof 2017). I found three groups of "epistemic practices", each of which mediates in a typical way between resistance and adaptation in developmental practice: Laboratization, Staging, and Proto-Ethnography.
Only in their interplay do the reconstructed "epistemic practices" become creatively effective for the field of social robotics. At their core, they alternate in their different abilities to deal with social complexity. An interplay of (temporary) exclusion from, as well as striving for (re)entry into, the complexity and contingency of social worlds occurs. The idea for a robotic application may originate from an everyday observation by the researchers, but is then transferred to an isolated laboratory scenario in order to generate a measurable effect. A subsequent exploratory user study of robotic behavior 'in the wild' can then in turn open up development practice to new complexities and contingencies. The data and machines generated in this way are then presented or circulated as videos.
This only becomes visible with an analytical perspective that does not distinguish a priori between the supposedly 'actual' research work and practices that enable and limit it.
Paper short abstract:
This paper focuses on humans behind an Indian chatbot-based mental health app. I trace the trajectories of designers, programmers and psychologists, carve out their techno-optimism, socio-technical imaginaries and moral-economic aspirations, and describe sociotechnical becomings.
Paper long abstract:
The Covid-19 pandemic and the associated disruption of everyday lives have not only exacerbated a global mental health crisis but also led to new opportunities and markets for AI-based mental health (self-)care tools. Mental health apps act as “band-aid” solutions for people who struggle to manage their everyday lives, helping them to live with stress or overcome emotional challenges. Their designers and programmers make multiple assumptions about wellbeing, human-techno-relations, and the contexts or forms of life in which these aids are put to use. Using the case of an Indian-developed chatbot-based mental health app, this paper focuses on the humans behind AI. Based on fieldwork and interviews with the app’s designers, programmers and psychologists, most of them female, at a start-up in Bangalore, I first trace their trajectories and carve out their techno-optimism, socio-technical imaginaries and moral-economic aspirations. What are the therapeutic approaches and the assumptions about human suffering, behavior and agency that they inscribe into the app? How do universalism, particularity and India as a location matter for a well-known app that targets users globally? Next, I delineate processes of ‘sociotechnical becoming’ through joyful experimentation, tinkering and affective work as programmers and psychologists engage in rewriting codes and contents. These processes reveal design and care as distributed amongst engineers, psychologists, users and algorithms. By bringing the assumptions, values, practices and social relations of AI designers to the fore, I show how deeply the social is inscribed into AI-based health technologies, which, I argue, can be both problematic and an asset.
Paper short abstract:
Automation often appears simply to transfer labour from humans to machines, but automation involves a rearrangement of social conditions. This paper looks at how the clinical practitioner’s role becomes blurred with that of technician when working on automated mental health software.
Paper long abstract:
What happens to the clinical practitioner, and to treatment, when mental health treatment is automated using computer software? It may appear that the practitioner is no longer required because the technology assumes their role. The practitioner is not quite eliminated, however; mental health apps which automate treatment displace rather than replace the clinical practitioner, whose role becomes that of technical operator.
This achieves ‘operational autonomy’ and allows the operator, through the mediation of the software, to act upon individual subjects or groups as they appear as incorporations of various manipulable abstractions: datasets. Treatment software can be reconfigured according to the responses users give through patient outcome forms in order to improve treatment; this ‘adjustment of means to ends’ gives the operator a sense that the outcome can be reached through manipulation of the software itself. Treatment becomes a technical action of adjustment.
What becomes of the patient, then, when the treatment expertise of the clinician is remoulded into that of the technical operator? This paper will end with a discussion of how mental health, when treated as a technical system, becomes, on the one hand, subject to the control of the individual being treated and, on the other, removed from their subjective experience.