- Convenors:
- Philip Garnett (University of York)
Tom Stoneham (University of York)
Zoe Porter (University of York)
Darren Reed (University of York)
- Chair:
- Philip Garnett (University of York)
- Format:
- Traditional Open Panel
- Location:
- Auditorium, main building
- Sessions:
- Thursday 18 July, -
Time zone: Europe/Amsterdam
Short Abstract:
As development rushes towards increasingly sophisticated autonomous machines, claims about how such systems will impact human life are often overinflated or dismissed outright. AI safety is also rarely central to design, despite a growing focus on safety in policy discourse and a growing awareness of unintended consequences.
Long Abstract:
After a number of what could loosely be described as false starts, society is at the point of transformation by autonomous systems, robotics, and AI. The capacity of autonomous systems to act independently has reached the point where such systems take an increasing role in society alongside, and perhaps at the expense of, humans. Autonomous systems also look set to become integrated into the creative and social side of human society, perhaps transforming human society into the sphere of cybernetics. However, relative to the attention paid to the technical aspects of developing autonomous systems, the safety of such systems is arguably a secondary consideration, a situation that is at least in part a consequence of the ‘move fast and break things’ cultural dynamic of Silicon Valley.
This panel looks in the opposite direction and asks what role societies can play in shifting away from the ‘tech-fix’ towards a more relational approach. In this way, we ask how we can ‘move slower and fix things’ when the rush to increase the autonomy of machines comes at the expense of human autonomy, putting humanity in danger of losing sight, and losing control, of its own autonomy. In this panel we will challenge the utility of the term ‘autonomy’ and ask what must be accounted for if an autonomous system is to be considered ‘safe’ to deploy in a society of autonomous humans.
It could be argued that such uses of the term ‘autonomy’ have limited utility and applicability. Instead, we could embrace multiplicity: autonomies rather than autonomy. Or we could re-situate autonomy in social and interactional contexts and ask how different autonomies relate to one another. Furthermore, what does safety mean? An effort to limit physical, mental, and emotional harm? What other dimensions of safety must be addressed?
Accepted papers:
Session 1 Thursday 18 July, 2024, -
Paper short abstract:
As the use of LLMs enters therapeutic spaces, how can we foster and protect patient autonomy and mental health? To what extent can therapeutic functions be performed by LLMs to provide scalable therapeutic care without replacing or undermining human agency?
Paper long abstract:
With the rise in mental health needs and insufficient resources to meet them, artificial intelligence (AI) technologies have emerged as an optimistic techno-solution to bridge this gap. Despite Large Language Models (LLMs) such as ChatGPT being criticized for their lack of accuracy and analysis, they have been able to provide scalable remote services to users worldwide. The co-founder of ‘koko’, an online platform that provides peer-to-peer mental health support, believed that if the platform utilized LLMs like GPT-3, it could increase peer support to the benefit of users seeking help. However, this decision was met with public concern about how such an integration would affect users’ help-seeking behaviours and their ability to foster their agency when presented with therapeutic support from an AI instead of their peers.
These concerns are amplified by the emergence of LLMs in therapeutic care, which shifts physical spaces to digital ones, provides more automated care, and moves decision-making tasks that once rested solely on humans to AI. In psychological spaces that require building therapeutic rapport, trust, and relational autonomy reliant on empathy, can LLMs be deployed safely without the deterioration of patient well-being? Although empathy is required for most therapeutic tasks, some skills-based approaches can be delegated to an AI, where the scalability provided by LLMs could be beneficial. This talk will explore and examine therapeutic interventions that utilize LLMs and the precarious balance that must be struck between the scalability of care and the protection of patient autonomy.
Paper short abstract:
This presentation examines how algorithmic systems undermine human autonomy and subtly reshape decision-making. It advocates for regulatory approaches that transcend content moderation, focusing on the coupled genesis of the human mind and algorithmic techniques.
Paper long abstract:
This presentation analyzes the relationship between human autonomy and the governance structures embedded within pervasive algorithmic systems. Increasing reliance on these systems fosters cognitive and moral dependencies, subtly reshaping decision-making processes. Algorithmic nudging, through recommendations and tailored content, exerts a homogenizing influence that standardizes preferences and limits cognitive horizons under the guise of personalized choice. Citing Bovens (2009), the presentation highlights that dependence on external nudges in decision-making can obstruct the development of cognitive and moral autonomy, resulting in infantilisation and reduced personal accountability.
Drawing on Gilbert Simondon's philosophy, the presentation critiques the emphasis of current regulatory frameworks on content moderation as insufficient. It advocates for a holistic and processual approach, investigating the technicities and technical lineages behind algorithmic systems to expose how they curate and present information, influencing thought patterns and decision-making at a systemic level. This analysis would unveil unthought consequences and reveal the coupled genesis of the human mind and algorithmic techniques. Understanding this interplay will allow for new regulatory structures that balance technological innovation with the safeguarding of cognitive autonomy. This proactive approach seeks to maintain individuals' capacity for discerning judgment and ethical decision-making amid the rapid advancement of algorithmic systems.
Concluding with a provocative question, the presentation invites reflection on the mutual alignment between AI systems and human principles. It questions whether it is solely humans aligning AI to their principles or whether, conversely, AI is subtly nudging humans towards its optimal operational paradigms, thereby reshaping our thought and value formation.
Paper short abstract:
Risk assessment in autonomous systems typically relies on safety assurance approaches that focus on technical aspects. This study applies the SOTEC framework to identify sociotechnical risks in autonomous robot swarm development, offering nuanced, contextual insights beyond the technical.
Paper long abstract:
Assessing the risks of autonomous systems involves the use of safety assurance approaches to analyse system-level safety requirements such as robustness, fault tolerance, and runtime monitoring (Hawkins et al., 2022). These approaches give safety engineers a framework for constructing assurance arguments that demonstrate confidence in the safety of their system, particularly to regulatory bodies. So far, safety assurance cases have centred on ‘technical’ practices. However, a continued focus on technical solutions for the systematic identification, assessment, and mitigation of potential hazards (e.g., Failure Modes and Effects Analysis) has neglected the more complex human and contextual factors that underpin safety. This presentation will discuss ethnographic findings from a study of autonomous robot swarm development, shedding light on the complex human, social, and organisational sources of risk and how they can be systematically identified to build holistic, nuanced insights that complement the more technical aspects of assurance cases. The study applies Macrae’s (2023) Structural, Organisational, Technological, Epistemic, and Cultural (‘SOTEC’) framework to identify and understand sociotechnical sources of risk in the development, deployment, and operation of an autonomous robot swarm in a public cloakroom. Applying ideas from SOTEC alongside concepts of autonomy in robotics, on the one hand, and in humans, on the other, this presentation will discuss how the technology poses unique definitional questions around ‘how safe’ the system should be and whether ‘autonomous’ is a useful term for describing such systems.
Paper short abstract:
Semi-automated cranes aim to enhance the precision and safety of construction work. Our paper examines various assumptions about safety from the perspectives of technology developers and crane operators, shedding light on the "new" control constellations of cyber-physical construction sites.
Paper long abstract:
Crane operations on construction sites are regarded as demanding precision work, where accidents often result in property damage and personal injury. Accident investigations tend to attribute human error as the primary cause. Recently, assistance systems for semi-automated cranes have promised to support crane-operator interaction and enhance precision, efficiency, and safety.
Our paper questions how automation can contribute to safety on complex, sociotechnical construction sites (Lingard et al. 2012). Previous research has shown that safety is a necessary but insufficient condition for preventing accidents in human-technology interactions (Nordqvist/Lindblom 2018). There is an ongoing debate surrounding the trade-off between safety and decision-making autonomy, particularly in the context of uncertain work processes (Grote 2020). What are the arrangements for construction site safety, and how is decision-making distributed among heterogeneous entities?
This paper presents different understandings of safety within cyber-physical construction sites and the prerequisites for responsible interaction between crane operators, semi-automated cranes, and construction sites. We investigate the concept of 'hybrid control' within human-technology relations and the ways the autonomy of operators is co-constructed within organizational work situations (Suchman 1998; Grote 2018; Kropp 2021).
Based on interviews with crane operators and developers of crane assistance systems, the paper identifies a conflict of objectives between technically introduced safety measures and the challenging task of ensuring safety under the complex conditions of construction sites and their sociotechnical organization (Latour 1994). The importance of workarounds and the underestimation of relational control constellations are emphasized.