- Convenors:
- Jay Shaw (University of Toronto), Núria Vallès-Peris (Spanish National Research Council - CSIC), Miquel Domènech (Universitat Autònoma de Barcelona), S. Scott Graham (The University of Texas at Austin)
- Format:
- Traditional Open Panel
- Location:
- HG-02A24
- Sessions:
- Thursday 18 July, -, -
Time zone: Europe/Amsterdam
Short Abstract:
The ethical governance of artificial intelligence (AI) and robotics has been widely discussed across disciplines. In this panel, we aim to highlight papers that explore unique dimensions of governance of AI and robotics in health care and public health that put bioethics and STS in dialogue.
Long Abstract:
The ethical governance of artificial intelligence (AI) and robotics has been widely discussed in recent years, drawing commentary from a variety of disciplines. For the purposes of this open panel, we define governance as a distributed set of policies, laws, rules, guidelines, practices and discursive strategies oriented toward managing the risks and uncertainties of automated systems in a given domain. In this way, approaches to governance of AI and robotics might encompass both short-term and long-term impacts on organizations and societies, as well as local decisions about the design and deployment of technological devices and the daily practices they compel. Governance might refer to highly formalized laws and regulations, or to locally relevant informal practices related to innovation development, evaluation, procurement and/or implementation.
The governance of automated systems, i.e. AI and robotics, in the domains of health care and public health introduces a unique set of considerations. These are already highly regulated fields that articulate power at multiple nodes, contain particular bioethical commitments, and are embedded within the epistemic complexity of techno-scientific knowledge. Conventional approaches to governance in health care and public health adopt an instrumentalist paradigm, neglecting the potentially deeper shifts that might occur in these fields with deployments of AI and robotics. For these reasons, the governance of these technologies in the context of broader health governance approaches represents unique opportunities for contributions at the intersection of bioethics and Science & Technology Studies (STS) on the practices that constitute governance.
In this panel, we aim to highlight papers that explore unique dimensions of governance of AI and robotics in health care and public health that put bioethics and STS in dialogue. Perspectives that critique conventional understandings, illuminate the entanglement of local and global dimensions, attend to the infrastructures that sustain AI and robotics in healthcare, and outline implications for ethical governance are welcome.
Accepted papers:
Session 1: Thursday 18 July 2024, -
Short abstract:
In this paper, we identify the values that animate the United States Food & Drug Administration (USFDA)'s review of medical devices. Using an approved AI radiology algorithm as a case study, we examine the values and politics that shape regulation of medical technologies in the United States.
Long abstract:
Much of the STS scholarship and popular discourse on national regulation of pharmaceuticals and medical devices has focused on drugs. There has been little attention to how regulatory agencies review and regulate medical devices. However, since the mid-20th century, medical devices have been increasingly used in routine clinical care. Medical technologies include a broad range of objects such as MRI and other imaging machines, genomic assays, surgical implants, assistive devices, and most recently algorithms. Given the rise of biomedical engineering and computer science and these fields' investment in finding new healthcare markets, the proliferation and regulation of medical devices is important to attend to. In this paper, we identify the values that animate the United States Food & Drug Administration (USFDA)'s review of medical devices. Unlike pharmaceuticals, which undergo clinical trials, medical devices are reviewed through multiple pathways that vary in their standards for approval, ranging from required in-human clinical trials to other pathways that enable devices to make it to market without such safety and efficacy studies. As artificial intelligence (AI) tools become commonplace in a variety of settings, including biomedical research and healthcare delivery, one particularly important field they have infiltrated is radiology. Radiology has the greatest number of FDA-cleared AI applications compared to other medical specialties. Using an approved AI radiology algorithm as a case study, we examine the values and politics that shape regulation of medical technologies in the United States. Our paper contributes to STS scholarship on medical devices, regulation, AI, and evidence-based medicine.
Short abstract:
In describing the evaluative practices of developers of an AI imaging system around 'rare cases', we point to the 'blind spot' of two governance modalities regarding software as a medical device. There, uncontrolled moral deliberations seeking to control the variability of laboratory processes are deployed.
Long abstract:
We propose a description of the day-to-day work of the developers of a proprietary, AI-based imaging system applied to microbiology laboratory processes. We will focus on the practices of evaluating the different: practices where developers must decide whether or not to accept clients' desire to apply the imaging system to 'rare' cases, that is, cases resulting from a special combination of the culture plates used in the laboratory and the microorganisms to be detected. Identifying what is different triggers hesitation and discussion concerning one facet of AI governance: development management in the industrial context. The empirical relevance of the practices we will describe, drawing upon a seven-month ethnography in the R&D area of a company dedicated to creating robotics and AI solutions for bacteriology laboratories, lies in the fact that they are hardly traceable. They happen in the 'blind spot' of two governance modalities. One is integrated into the company: the design control and risk management functions. The other belongs to consensus-based, industrial governance: the standards guiding the development of software as a medical device. The former derives its legitimacy from the latter. The core idea we will discuss is that, by happening in that blind spot, the evaluation of the different is a moment of uncontrolled moral and aesthetic deliberation in which, paradoxically, controlling the variability of laboratory processes becomes a sine qua non condition of the efficacy and safety of AI, and of the very possibility of optimizing those processes.
Short abstract:
Using a participatory approach and co-design with diverse actors in health, Artificial Intelligence (AI), bioethics, and the community, this research aims to develop a framework to guide AI governance in health systems and responsibly deploy diabetes prevention and prediction models in Canada.
Long abstract:
Responsible Artificial Intelligence (AI) can be understood as "being responsible for the power that AI brings" (Dignum, 2022). It demands the identification of actors that ought to take responsibility for developing and deploying technologies ethically. At the conceptual level, responsible innovation adds explicit ethical reflection to "values" conflicts and their resolutions in Science and Technology Studies. This is because the attribution of responsibility is an act carried out by specific actors with broad societal implications. Importantly, responsible innovation demands a unique set of considerations when applied to population health and health systems. Using a participatory approach and co-design with diverse actors in health, AI, bioethics, and the community, this research aims to develop a framework to guide AI governance in health systems and responsibly deploy diabetes prevention and prediction models in Canada. The multi-level governance of these AI-enabled technologies creates possibilities and opportunities for involving a broader range of actors to provide meaningful, values-based inputs and encourage human-centered design practices. In this context, we will examine which actors should participate in the decision-making processes and explore the ethical consequences of the distribution of responsibilities among these actors for the governance of AI-enabled diabetes models, to help achieve sustainable, high-quality care for the health systems in Canada and beyond.
Short abstract:
Imaginaries of techno-science are key to understanding the forms of governance of AI and robotics. We conducted an analysis of Spanish newspapers over the last ten years that addressed AI and robotics in healthcare and studied the place that imaginaries of automated systems have in our daily life.
Long abstract:
The embedded material and semiotic network that sustains the complex regime of intelligibility of innovation in healthcare is mutating with the incorporation of Artificial Intelligence (AI) and robotics. This situation opens up new approaches to the bioethical debate that these incorporations are producing. In this framework, the imaginaries of techno-science are a key element for understanding the current forms of governance of AI and robotics and the different strategies and developments of specific innovations.
To grasp this space, we employed an empirical approach, conducting a documentary analysis of content from Spanish newspapers over the last ten years that addressed AI and robotics in healthcare. Through thematic analysis, we identified continuities and discontinuities in how AI and robotics have been portrayed in the flow of specific technologies.
On this basis, the work allows for a discussion of the role of these imaginaries in the negotiation and assessment of the risks and uncertainties of automated systems, and of the specific actions taken to address them, both in formalised protocols and in our daily life.
Accordingly, the work contributes to a deeper understanding of the complex interplay between science, technology, media, ethics, and society. Ethical governance of AI and robotics implies the engagement and consideration of a wide, continuously mutating public that manages and negotiates different evolving imaginaries, in which the media play a fundamental role.
Short abstract:
This paper examines the entanglements between ethics and AI in oncologic imaging, focussing on the socio-technological aspects of AI's transformative impact on medicine and healthcare, and highlights the implications for ethical governance in the case of medical AI.
Long abstract:
Artificial intelligence (AI) is transforming medical knowledge production, healthcare practices, and infrastructures. In response, various guidelines have been formulated to ensure that fundamental ethical principles, including transparency, fairness, accountability, and equity, are upheld throughout the development and application of medical AI. These principles are critical for safeguarding patient well-being, mitigating bias, and maintaining professional standards. Current discourses are focusing on the trustworthiness of AI and the risks associated with the adoption of AI technologies within healthcare settings. This paper presents the findings of an empirical analysis focused on unravelling the ethical and societal implications of AI in the domain of oncologic imaging. Within the realm of radiology, the advancement of AI is driven by imaginaries of enhancing diagnostic performance by increasing accuracy and simplifying experts' decision-making, as AI has been demonstrated to "outperform" humans. At the same time, concerns have been raised that AI in healthcare could amplify ethical and societal injustices. By delving into discursive conceptual ambiguities (such as explainability, interpretability, and transparency) and potential biases (particularly those related to sex and gender dimensions), employing an interdisciplinary and embedded ethics approach, and highlighting the significance of considering situated practices and stakeholder engagement as intersections where AI and ethics are entangled, the paper explores socio-technical conditions and knowledge production practices. Specifically, it reflects on algorithmic fairness, decision-making processes, and human oversight as essential components of ethical governance for medical AI.
Short abstract:
AI/robotic healthcare technologies offer promising solutions for aging populations but raise safety and ethical concerns, prompting a reflection on ensuring these technologies enhance quality care. The talk emphasises the need for safe, useful technology developed through inclusive research methods.
Long abstract:
With aging populations and strained healthcare systems, AI and robotic healthcare technologies offer promising solutions but also raise safety and ethical concerns. This talk prompts reflection on the essence of care and the roles of different stakeholders in ensuring technology enhances what is considered "good quality care” by care recipients and caregivers. Drawing from ethnographic research she conducted in Japan and the UK for 18 months in 2022-2024, De Togni emphasises how caregiving is a fundamental human activity requiring empathy, and questions how robots may fit into this space. De Togni addresses two main issues with the introduction of these technologies in healthcare practices: 1) How to ensure the technology is safe and useful; and 2) How to integrate the perspectives of end-users in early technology development. She concludes that more inclusive participatory research methods are needed to develop safe, effective, and acceptable healthcare solutions involving the deployment of AI and robots in care practices.
Short abstract:
This paper examines the sociotechnical practice of robotic prostate surgery. It analyzes how an affective metric, "patient regret", is utilized to evaluate the outcomes of robotic surgery for prostate cancer and how it configures bioethical structures of responsibility.
Long abstract:
This paper examines how robotic surgery for prostate cancer is epistemically and ethically evaluated in the governance of healthcare technologies. The prostate is deeply imbued with notions of gender, sexuality, and race (Johnson, 2021; Wailoo, 2012), and the surgical suite is an affectively charged space (Prentice, 2012). Prostate cancer care has undergone significant sociotechnical changes in the twenty-first century. In contemporary high-technology healthcare settings, robotic surgery has been one response to expanding biomedical markets for prostate intervention. Affective metrics, especially the presence or absence of patient regret, are mobilized in the post-operative evaluation of robotic prostatectomies to compare outcomes of operations performed by surgeons unaided by robotics and the algorithms that run these human-machine hybrid systems. Rates of regret are used as proxies to establish patient satisfaction in consumer-driven models of healthcare efficacy. The paper argues that this is further structured by bioethical notions of responsibility that place the greatest emphasis on logics of choice (Mol, 2008) and considers how responsibility is configured when algorithms are driving surgical intervention.
Short abstract:
This paper explores the assemblages that configure a care robot's autonomy during the testing process in an aged care nursing home. Through a combination of qualitative methods, we provide a description of the struggles in constructing, repairing, and maintaining glimpses of robot autonomy.
Long abstract:
Robots for aged care are considered a beacon of hope amid the growing imbalance between the demographic rise of older adults needing care and the strain on healthcare services to provide integrated assistance. Within this promissory discourse, claims regarding robotic autonomy have become increasingly important in scientific research and policy agendas (Lipp, 2022). Autonomy, however, far from being an inherent robotic attribute, is endeavoured, negotiated, and only sometimes achieved within particular human-machine configurations. Furthermore, successful robotic autonomous behaviour implies efforts in establishing a level of human-machine collaboration, where responsibilities and decision-making authority are distributed between humans and machines (Mindell, 2015).
Within this context, this paper explores the assemblages that configure a care robot's autonomy during the process of robot testing in an aged care nursing home. Specifically, we aim to address: what phenomena participate in the assemblages that configure (glimpses of) robot autonomy?
Employing a combination of qualitative methods – participant observation followed by semi-structured and open-ended interviews with residents – we provide a thick description of the intricate dynamics and struggles encountered in constructing, repairing, and maintaining instances of robotic autonomous behaviour. Our findings shed light on the need for a redefinition of autonomy in aged care robotics and, most importantly, on how the notion of place plays a key role in the interplay of spatial and social assemblages that construct robot autonomy.