- Convenors:
  - Jay Shaw (University of Toronto)
  - Núria Vallès-Peris (Spanish National Research Council - CSIC)
  - Miquel Domènech (Universitat Autònoma de Barcelona)
  - S. Scott Graham (The University of Texas at Austin)
- Format:
- Traditional Open Panel
Short Abstract:
The ethical governance of artificial intelligence (AI) and robotics has been widely discussed across disciplines. In this panel, we aim to highlight papers that explore unique dimensions of governance of AI and robotics in health care and public health that put bioethics and STS in dialogue.
Long Abstract:
The ethical governance of artificial intelligence (AI) and robotics has been widely discussed in recent years, drawing commentary from a variety of disciplines. For the purposes of this open panel, we define governance as a distributed set of policies, laws, rules, guidelines, practices and discursive strategies oriented toward managing the risks and uncertainties of automated systems in a given domain. In this way, approaches to governance of AI and robotics might encompass both short-term and long-term impacts to organizations and societies, as well as local decisions about design and deployment of technological devices and the daily practices they compel. Governance might refer to highly formalized laws and regulations, or to locally-relevant informal practices related to innovation development, evaluation, procurement and/or implementation.
The governance of automated systems (i.e., AI and robotics) in the domains of health care and public health introduces a unique set of considerations. These are already highly regulated fields that articulate power at multiple nodes, carry particular bioethical commitments, and are embedded within the epistemic complexity of techno-scientific knowledge. Conventional approaches to governance in health care and public health adopt an instrumentalist paradigm, neglecting the potentially deeper shifts that deployments of AI and robotics might produce in these fields. For these reasons, the governance of these technologies in the context of broader health governance approaches presents unique opportunities for contributions at the intersection of bioethics and Science & Technology Studies (STS) on the practices that constitute governance.
In this panel, we aim to highlight papers that explore unique dimensions of the governance of AI and robotics in health care and public health, putting bioethics and STS in dialogue. We welcome perspectives that critique conventional understandings, illuminate the entanglement of local and global dimensions, attend to the infrastructures that sustain AI and robotics in healthcare, and outline implications for ethical governance.
Accepted papers:
Session 1
Kelly Joyce (Drexel University), Melanie Jeske (University of Chicago)
Short abstract:
In this paper, we identify the values that animate the United States Food & Drug Administration (USFDA)'s review of medical devices. Using an approved AI radiology algorithm as a case study, we examine the values and politics that shape the regulation of medical technologies in the United States.
Long abstract:
Much of the STS scholarship and popular discourse on national regulation of pharmaceuticals and medical devices has focused on drugs; little attention has been paid to how regulatory agencies review and regulate medical devices. Yet since the mid-20th century, medical devices have become increasingly common in routine clinical care. Medical technologies include a broad range of objects, such as MRI and other imaging machines, genomic assays, surgical implants, assistive devices, and, most recently, algorithms. Given the rise of biomedical engineering and computer science and these fields' investment in finding new healthcare markets, the proliferation and regulation of medical devices are important to attend to. In this paper, we identify the values that animate the United States Food & Drug Administration (USFDA)'s review of medical devices. Unlike pharmaceuticals, which undergo clinical trials, medical devices are reviewed through multiple pathways that vary in their standards for approval, ranging from required in-human clinical trials to pathways that allow devices to reach the market without such safety and efficacy studies. As artificial intelligence (AI) tools become commonplace in a variety of settings, including biomedical research and healthcare delivery, one particularly important field they have entered is radiology, which has the greatest number of FDA-cleared AI applications of any medical specialty. Using an approved AI radiology algorithm as a case study, we examine the values and politics that shape the regulation of medical technologies in the United States. Our paper contributes to STS scholarship on medical devices, regulation, AI, and evidence-based medicine.
Joaquin Yrivarren (Universidad Autónoma de Barcelona), Miquel Domènech (Universitat Autònoma de Barcelona)
Short abstract:
In describing how the developers of an AI imaging system evaluate 'rare cases', we point to the 'blind spot' of two governance modalities for software-as-a-medical-device. There, uncontrolled moral deliberations aimed at controlling the variability of laboratory processes are deployed.
Long abstract:
We propose a description of the day-to-day work of the developers of a proprietary, AI-based imaging system applied to microbiology laboratory processes. We will focus on the practices of evaluating the different: practices in which developers must decide whether or not to accept clients' requests to apply the imaging system to 'rare' cases, that is, cases resulting from a special combination of the culture plates used in the laboratory and the microorganisms to be detected. Identifying what is different triggers hesitation and discussion concerning one facet of AI governance: development management in the industrial context. The empirical relevance of the practices we will describe, drawing upon a seven-month ethnography in the R&D area of a company dedicated to creating robotics and AI solutions for bacteriology laboratories, lies in the fact that they are hardly traceable. They happen in the 'blind spot' of two governance modalities. One is integrated into the company: the design control and risk management functions. The other belongs to consensus-based industrial governance: standards guiding the development of software as a medical device. The former derives its legitimacy from the latter. The core idea we will discuss is that, by happening in that blind spot, the evaluation of the different is a moment of uncontrolled moral and aesthetic deliberation in which, paradoxically, controlling the variability of laboratory processes becomes a sine qua non condition of the efficacy and safety of AI and of the very possibility of optimizing those processes.
Remziye Zaim (University of Toronto), Joseph Donia (University of Toronto), Jay Shaw (University of Toronto)
Short abstract:
Using a participatory approach and co-design with diverse actors in health, Artificial Intelligence (AI), bioethics, and the community, this research aims to develop a framework to guide AI governance in health systems and responsibly deploy diabetes prevention and prediction models in Canada.
Long abstract:
Responsible Artificial Intelligence (AI) can be understood as "being responsible for the power that AI brings" (Dignum, 2022). It demands the identification of actors who ought to take responsibility for developing and deploying technologies ethically. At the conceptual level, responsible innovation adds explicit ethical reflection to the values conflicts and their resolutions studied in Science and Technology Studies, because the attribution of responsibility is an act carried out by specific actors with broad societal implications. Importantly, responsible innovation demands a unique set of considerations when applied to population health and health systems. Using a participatory approach and co-design with diverse actors in health, AI, bioethics, and the community, this research aims to develop a framework to guide AI governance in health systems and responsibly deploy diabetes prevention and prediction models in Canada. The multi-level governance of these AI-enabled technologies creates possibilities and opportunities for involving a broader range of actors to provide meaningful, values-based input and encourage human-centered design practices. In this context, we will examine which actors should participate in decision-making processes and explore the ethical consequences of the distribution of responsibilities among these actors for the governance of AI-enabled diabetes models, to help achieve sustainable, high-quality care for health systems in Canada and beyond.
Miguel Larrea Schindler (Universidad Ramón Llull-Universitat Autònoma de Barcelona), Núria Vallès-Peris (Spanish National Research Council - CSIC), Miquel Domènech (Universitat Autònoma de Barcelona), Joan Moyà-Köhler (Universitat Autònoma de Barcelona)
Short abstract:
Imaginaries of techno-science are key to understanding the forms of governance of AI and robotics. We analysed coverage of AI and robotics in healthcare in Spanish newspapers over the last ten years, studying the place that imaginaries of automated systems occupy in our daily life.
Long abstract:
The embedded material and semiotic network that sustains the complex regime of intelligibility of innovation in healthcare is mutating with the incorporation of Artificial Intelligence (AI) and robotics. This situation opens up new approaches to the bioethical debate that these incorporations are producing. Within this framework, the imaginaries of techno-science are a key element for understanding the current forms of governance of AI and robotics and the different strategies and developments of specific innovations.
In order to grasp this space, in this work, we employed an empirical approach, conducting documentary analysis of content from Spanish newspapers over the last ten years that addressed AI and robotics in healthcare. Through thematic analysis, we identified continuities and discontinuities in how AI and robotics have been portrayed in the flow of specific technologies.
On this basis, the work opens a discussion of the role these imaginaries play in the negotiation and assessment of the risks and uncertainties of automated systems, and of the specific actions taken to address them, both in formalised protocols and in daily life.
Accordingly, the work contributes to a deeper understanding of the complex interplay between science, technology, media, ethics, and society. The ethical governance of AI and robotics implies the engagement of a wide, continuously mutating public that manages and negotiates different evolving imaginaries, in which the media play a fundamental role.
Christian Herzog (University of Lübeck)
Long abstract:
Initial achievements and predicted progress in health-related decision support systems have given rise to quite general claims of an impending epistemic obligation for their utilization. Most of these claims result from indications that AI can reduce diagnostic errors and improve health outcomes.
Yet work on the ethics and epistemology of explainable artificial intelligence (AI) has begun to contest such an obligation, arguing that AI's potential epistemic opacity infringes on professional responsibility and obstructs shared decision-making, thereby impairing health outcomes. However, a recent health technology assessment report suggests that, despite contributing to patient autonomy, shared decision-making yields no statistically significant effects on, e.g., morbidity outcomes. Consequently, attempts to increase patient autonomy via explainable AI appear to be of secondary importance. By adopting an epistemic injustice perspective, and inspired by feminist bioethics, we instead identify a possible obligation for medical research and technology development to epistemically include patient perspectives in designs, taking patients seriously as knowers from an innovation's inception.
The conflicting paradigms of maximizing health outcomes versus supporting epistemic justice give rise to at least two different approaches to medical AI: (i) Designing decision support systems with explainable and evaluative interfaces that allow for contesting an AI output during shared decision-making post-hoc, or (ii) attempting to epistemically include patient perspectives within development and the entire life-cycle of the medical AI systems.
Within this context, we ask how ethics and STS scholarship can investigate epistemic inclusion in medical AI and discuss its implications for healthcare governance.
Melanie Goisauf (BBMRI-ERIC)
Long abstract:
Artificial intelligence (AI) is transforming medical knowledge production, healthcare practices, and infrastructures. In response, various guidelines have been formulated to ensure that fundamental ethical principles, including transparency, fairness, accountability, and equity, are upheld throughout the development and application of medical AI. These principles are critical for safeguarding patient well-being, mitigating bias, and maintaining professional standards. Current discourses focus on the trustworthiness of AI and the risks associated with the adoption of AI technologies within healthcare settings. This paper presents the findings of an empirical analysis aimed at unravelling the ethical and societal implications of AI in the domain of oncologic imaging. Within radiology, the advancement of AI is driven by imaginaries of enhancing diagnostic performance by increasing accuracy and simplifying experts' decision-making, as AI has been demonstrated to "outperform" humans. At the same time, concerns have been raised that AI in healthcare could amplify ethical and societal injustices. By delving into discursive conceptual ambiguities (such as explainability, interpretability, and transparency) and potential biases (particularly those related to sex and gender dimensions), employing an interdisciplinary and embedded ethics approach, and highlighting the significance of situated practices and stakeholder engagement as intersections where AI and ethics are entangled, the paper explores socio-technical conditions and knowledge production practices. Specifically, it reflects on algorithmic fairness, decision-making processes, and human oversight as essential components of ethical governance for medical AI.
Giulia De Togni (University of Edinburgh)
Short abstract:
This presentation explores care robots' impact on caregiving by focusing on HRI research. It discusses integration challenges in diverse cultural contexts, drawing from a 14-month study in Japan and the UK, emphasizes user perspectives, and advocates for inclusive technology design.
Long abstract:
This talk examines the potential transformative effects of Socially Assistive Robots (SARs) on caregiving practices, with a focus on the crucial role of Human-Robot Interaction (HRI) research. It delves into the hurdles and ethical dilemmas surrounding the integration of SARs into daily life across different cultural contexts. Drawing from qualitative analysis of data gathered during a 14-month ethnographic study conducted in Japan and the UK from 2022 to 2023, the presentation includes observations in robotics labs and assisted living facilities, as well as qualitative interviews with 80 participants, including roboticists, caregivers, and care recipients. It highlights the importance of understanding and adapting to different interpretations of "quality care" in various cultural settings and examines how SARs could contribute to delivering such care. However, it also raises concerns about the potential risks and ethical challenges associated with these technologies, such as explainability, accessibility, safety, dignity, and privacy. Finally, it addresses issues that arise when end-users' perspectives (caregivers and care recipients) regarding the functionalities of SARs are not adequately considered. The talk introduces the concept of "lay experts" to acknowledge users' unique and valuable insights, often overlooked by roboticists. Ultimately, it advocates for integrating user perspectives into the design and implementation of the technology, emphasizing the need for more inclusive and equitable innovation approaches.
Jacob Moses (University of Texas Medical Branch)
Long abstract:
This paper examines how robotic surgery for prostate cancer is epistemically and ethically evaluated in the governance of healthcare technologies. The prostate is deeply imbued with notions of gender, sexuality, and race (Johnson, 2021; Wailoo, 2012), and the surgical suite is an affectively charged space (Prentice, 2012). Prostate cancer care has undergone significant sociotechnical changes in the twenty-first century. In contemporary high-technology healthcare settings, robotic surgery has been one response to expanding biomedical markets for prostate intervention. Affective metrics, especially the presence or absence of patient regret, are mobilized in the post-operative evaluation of robotic prostatectomies to compare outcomes of operations performed by surgeons unaided by robotics and the algorithms that run these human-machine hybrid systems. Rates of regret are used as proxies to establish patient satisfaction in consumer-driven models of healthcare efficacy. The paper argues that this is further structured by bioethical notions of responsibility that place the greatest emphasis on logics of choice (Mol, 2008) and considers how responsibility is configured when algorithms are driving surgical intervention.
Rosanna Ramirez Nethersole (Universitat Autonoma de Barcelona), Miquel Domènech (Universitat Autònoma de Barcelona), Núria Vallès-Peris (Spanish National Research Council - CSIC)
Long abstract:
Robots for aged care are considered a beacon of hope amid the growing imbalance between the demographic rise of older adults needing care and the strain on healthcare services to provide integrated assistance. Within this promissory discourse, claims regarding robotic autonomy have become increasingly important in scientific research and policy agendas (Lipp, 2022). Autonomy, however, far from being an inherent robotic attribute, is endeavoured, negotiated, and only sometimes achieved within particular human-machine configurations. Furthermore, successful robotic autonomous behaviour requires establishing a level of human-machine collaboration in which responsibilities and decision-making authority are distributed between humans and machines (Mindell, 2015).
Within this context, this paper explores the assemblages that configure a care robot's autonomy during robot testing in an aged care nursing home. Specifically, we ask: what phenomena participate in the assemblages that configure (glimpses of) robot autonomy?
Employing a combination of qualitative methods – participant observation followed by semi-structured and open-ended interviews with residents – we provide a thick description of the intricate dynamics and struggles encountered in constructing, repairing, and maintaining instances of robotic autonomous behaviour. Our findings shed light on the need to redefine autonomy in aged care robotics and, most importantly, on how the notion of place plays a key role in the interplay of spatial and social assemblages that construct robot autonomy.