- Convenors:
- Kevin Wiggert (Technical University of Berlin), Renate Baumgartner (VU Amsterdam)
- Chairs:
- Kevin Wiggert (Technical University of Berlin), Renate Baumgartner (VU Amsterdam)
- Discussant:
- Anamaria Malešević (Catholic University of Croatia)
- Format:
- Combined Format Open Panel
- Location:
- HG-06A33
- Sessions:
- Friday 19 July, -, -, -
Time zone: Europe/Amsterdam
Short Abstract:
AI is trending in medicine and healthcare. In this open panel we would like to explore how we research medical AI from an STS perspective. We will discuss the challenges and opportunities this research presents for us and for our research partners from other disciplines, and which methods we find productive.
Long Abstract:
Artificial intelligence (AI) has become an indispensable component of contemporary society, and its utilization in healthcare is increasingly prevalent. This technology possesses the potential to transform, expedite, and enhance many facets of medical practice, ranging from patient care to administrative tasks. Numerous research efforts within the global scientific community are directed toward various aspects of artificial intelligence development, implementation, and application, as well as the challenges and risks that emerge throughout the process. As STS scholars researching medical AI development, implementation and application, we are faced with manifold issues regarding useful methods and methodologies, normativities and, not least, the unique technicalities (e.g., a blackboxed inference engine) inherent to the field of medical AI.
In this open panel we will discuss how we, as scholars from STS and other backgrounds, are researching medical AI, and how we can approach this research methodically and methodologically.
Presentations and discussion will revolve around the following topics, among others:
What are productive and innovative research methods and methodologies to research medical AI?
How does our background in different disciplines influence our research?
How knowledgeable do we think we have to be regarding technicalities in the field?
How do we influence the field through our research? What is our role in the field, e.g. co-shaping?
How do we deal with ethical aspects of our research?
How do we deal with different normativities in the field?
Which other challenges do we encounter in our research?
How can STS communicate and demonstrate its utility/contribution to the medical AI community?
Format:
The first one or two sessions, depending on the submissions, will consist of presentations (3-4 presentations per session). The last session will be a workshop in which subtopics of the theme are discussed in smaller groups, followed by a fishbowl discussion.
Accepted papers:
Session 1: Friday 19 July, 2024
Paper short abstract:
In this presentation, I describe the methodological development and ongoing puzzles of a large research programme designed to understand and theorise the changing relations of data, care and learning at multiple scales in informatics-informed medicine.
Paper long abstract:
What kinds of work are required to produce learning in healthcare at the intersection of clinical care, research and informatics? How are data practices and care related to each other in this new configuration? And how are new forms of data and care work constituted by and constitutive of the specificities of time, place and personhood? These questions animate a five-year study of data and care practices in the era of biomedical AI, which I describe in this presentation. How to answer these questions ethnographically across multiple scales is the puzzle I address. Ethnographies of data are keenly advocated for in social studies of medical AI, but remain few and far between. Big data’s ‘mercurial’ character, resulting both from its ubiquity and polysemy, demands that we foreground its specificity, putting new instantiations of data practices into conversation with long-standing theoretical concerns.
The research takes a multiscale ethnographic approach in order to understand developments in data|care practices, including machine learning and AI, which operate across different dimensions of lived experience, from the home to the hospital to the nation state. We adopt a practice-based approach, starting from the premise that these practices are sociotechnical, situated, contingent and performative. In this presentation I delve into the challenges and opportunities this poses, considering issues such as scale, comparison, and how to capture the sensory. In so doing, I contribute to debates about how to rethink the conceptual and methodological repertoires STS uses to engage with medical AI.
Paper short abstract:
We present our ethnography of information technologists and healthcare providers as they implement a generative AI system for responding to patient messages. We will share how learning health sciences shape our work and the challenges we have faced studying generative AI in the clinical space.
Paper long abstract:
Since the public release of the generative AI chatbot ChatGPT, healthcare systems and electronic health record (EHR) vendors have jumped at the opportunity to integrate generative AI into clinical practice. At a large academic health center in the Midwestern United States, an interdisciplinary team of information technology (IT) professionals is working to roll out a system using ChatGPT to create AI-generated draft responses to patient inbox messages. The use of generative AI for assistance in administrative documentation tasks such as this is novel and rapidly evolving.
In this panel, we will present our current ethnography, exploring the emergent behaviors, interactions, and perceptions of the IT experts and healthcare providers who use, implement, and evaluate this new technology. We will discuss how the implementation, use, and evaluation of generative AI technologies are entangled processes that cannot be studied in isolation and how ethnography can help us understand the social phenomena that influence these processes. Our ethnographic methods are complemented by exploratory data analysis techniques using natural language processing. We will share how adding these techniques can support the findings of traditional ethnographic approaches when studying medical AI.
We frame our discussion and interpretative lens through the perspective of learning health sciences. Our presentation will explore how this perspective influences our current work and its contribution to STS scholarship on medical AI. Additionally, we will present the challenges we have encountered in the field as we try to negotiate our role as ethnographers with the expectations of IT professionals and providers.
Paper short abstract:
This contribution investigates ongoing innovations in pathology, particularly the integration of AI into diagnostic workflows. Utilizing ethnographic methods—interviews, observations, and filming—we reveal nuanced insights into the labor, craft and evolving ethical debates within the field.
Paper long abstract:
The daily work of pathologists, unseen and unknown to most of us, is undergoing fundamental transitions with widespread consequences. Pathology labs worldwide are switching from analogue to digital workspaces; many digitalized labs are also beginning to integrate AI into diagnostic workflows. It’s still unclear to what extent these systems will impact clinicians’ autonomy, under what circumstances these systems should be trusted, and how their use will impact the distribution of responsibilities. Our study aimed to maximize our empirical insights into these issues. At the same time, we were committed to informing a broad range of stakeholders and providing meaningful ethical guidance on AI implementation within pathology. To achieve this, we employed a multifaceted approach that combined in-depth semi-structured interviews to reveal professionals’ perspectives on the nature of pathology and AI’s possible role in their daily work, participant observations to learn more about how professionals perform their duties and apply their expertise, and an ethnographic film to capture the nuances of professionals’ daily practices. Furthermore, as a team, we approached the topics of digitalization and AI from a variety of backgrounds and areas of expertise, including bioethics, STS, philosophy, pathology, and narratology. By describing the benefits and downsides of our holistic approach to data collection, analysis, and dissemination, we hope to inspire reflection among researchers grappling with the intricate dynamics of digital transitions in similar contexts. We also hope to contribute valuable perspectives to the ongoing discourse on the responsible use of AI in daily healthcare practices.
Paper short abstract:
This contribution seeks to reflect on how to study and make sense of potential discrepancies between envisioned uses of AI-based systems in healthcare and their more complicated realities in practice.
Paper long abstract:
This contribution builds on ongoing explorative empirical research on visions and practices of use of AI-based systems in healthcare contexts. Designed as a multi-level investigation of digitalisation efforts within the German healthcare system, the project is concerned with the contingent and at times contested ways in which algorithmic and AI systems come into being. It aims to analyse and make sense of potential discrepancies between anticipated uses and their more complicated sociotechnical realities in practice. Against this background, this contribution explores how to study such discrepancies, including those connected to the ways narratives of patient-centered healthcare and user-centered approaches to technology development (fail to) unfold in practice, as well as the related in/visibilities of affected social groups, while being attentive to the power dynamics within such digitalisation settings. Drawing on conceptual sensibilities from research on the fragility of sociotechnical systems (e.g., Hommels et al., 2014; Jackson, 2014) and from care-ful research and generative critique in data and algorithm studies (e.g., Law & Lin, 2022; Zakharova, 2022), it examines these tensions and social dynamics in the development and use of AI-based systems in healthcare contexts.
Paper short abstract:
In this paper, we discuss our trajectory of researching patient-led open-source health innovation in the context of type 1 diabetes (T1D), without and with affected people who use health innovation products. We reflect on the importance of research methodologies that learn from embodied and lived knowledges.
Paper long abstract:
Health innovation is mainly envisioned in direct connection to medical research institutions or pharmaceutical and technology companies. Yet, these types of innovation often do not meet the needs and expectations of individuals affected by various health conditions. With the emergence of digital health technologies and social media, we can observe a shift in which people living with illness modify and improve medical and health devices outside of the formal research and development sector, figuring both as users and innovators. In our previous research, we have taken a closer look at the ethics of open-source patient-led innovation in the context of type 1 diabetes care, arguing that it falls short of being a "bottom-up" kind of innovation fostering the needs of the most under-served populations. Along the journey of investigating concerns of intersectional and global health justice, we have also become increasingly attentive to the need to research such concerns with T1D innovation users themselves. In this paper, we share what we have learnt through subsequent dialogues and consultations with academics, innovators, carers and persons with lived experiences of T1D. In particular, we share a range of specific and crucial socio-ethical issues and health needs we would not have become aware of had we not engaged with affected experts during our research. Building on our self-reflective trajectory, we draw crucial lessons about the type of concerns that are missing from digital health ethics debates and research methodologies without direct engagement with and learning from embodied and lived knowledges.
Paper short abstract:
Gathering research data in unfamiliar territories such as medical AI poses challenges. Social science researchers may also need support in navigating the complexities of AI. This paper emphasises Public and Patient Involvement (PPI) in medical AI research, presenting its significance in the ongoing PARADISE project.
Paper long abstract:
Artificial intelligence (AI) demonstrates exceptional potential in health and medicine. However, due to its disruptive nature, it is sometimes challenging to predefine the benefits and harms it can bring to health(care). Therefore, research aimed at understanding the benefits of AI interventions in medicine and highlighting specific societal, legal, ethical, or technical issues is crucial. Researchers also face the challenge of gathering data on a topic that is largely unfamiliar to the public, particularly if the research is focused on patients or healthcare professionals who have not yet had the opportunity to encounter AI. Social science researchers may also face many uncertainties in understanding AI due to technicalities in the field. Medical AI requires researchers to employ creative approaches, such as anticipatory ethics or arts-based research, to engage with participants. This paper will focus on PPI (Public and Patient Involvement) as a crucial aspect of medical AI research. PPI practice is common in drug and therapy testing and, when implementing AI, provides insights that are extremely useful at all stages, from design to use. The paper will present the experience of the PPI group deeply embedded in the PARADISE project (PersonAlisation of RelApse risk in autoimmune DISEase), where involved patients are not the subjects of research but partners in research and main contributors who help to steer the development process of the AI solution, i.e. a personalised, predictive tool that estimates the timing and degree of the individual’s immune system activation.
Paper short abstract:
We propose a three-stage methodology to research the emerging arrangements for implementing medical AI. We examine this in an investigation of different strategies for developing, implementing and validating AI diagnostic tools.
Paper long abstract:
STS research into medical AI requires strategies to keep up with the rapid deployment of AI tools in an evolving landscape. Traditional methods, like ethnographies of developer or user organisations, struggle to capture the emergence of new actors, their strategic transformation and constantly changing relationships. To address this, we propose an adaptive evolutionary methodology involving three phases: Landscape, Vignettes, and In-depth study. We developed and refined this methodology in an investigation of different strategies for developing, implementing and validating AI diagnostic tools. This methodology seeks to balance the insights from detailed local ethnography with the need to track emerging institutional and technical arrangements through interaction between diverse players over an extended period and across multiple locales.
Landscape: initial interviews with stakeholders offer insights into their perspectives, aid in understanding the challenges and solutions in adopting medical AI, and help identify dilemmas and emerging trends that guide further detailed studies.
Vignettes: cases selected from the Landscape to compare strategies across settings. Our medical AI study analyses two cancer detection tools to understand the interplay between AI developers, health providers and their contingent innovation ecosystems.
In-depth study: charts the long-term change process in a single case to capture the complex dynamics surrounding the emergence of a key player.
The methodology draws upon the Biography of Artefacts and Practice perspective. More specific STS concepts that engage these evolving developments — including theories of information infrastructure, domestication/social learning — will inform the interpretation of empirical findings and their implications for policy and practice.
Paper short abstract:
This paper explores the application of historical methodologies in collaborative, empirical research of medical AI. Based on a case study of AI integration in Dutch academic hospitals, it assesses how historical analysis adds to the dynamics of interdisciplinary, STS-oriented projects.
Paper long abstract:
This paper focusses on the interactions between historical analysis and STS research within the realm of medical AI, using as a case study an interdisciplinary research project on the integration of AI in pathology and radiology within Dutch academic hospitals. Historians are occasionally commissioned to participate in STS-oriented empirical research projects, e.g. to provide genealogical analyses of the impact of past developments on, as well as contextual conceptualizations of, the present. While significant scholarship exists on the relationship between STS and the history of science and technology (Daston, 2009), as well as on the role of history in the education of medical professionals (Jones et al., 2015), less is understood about practical collaboration between STS scholars and historians. Central to historical methodologies is the promise of critical and conceptual insights, serving implicitly as a counterbalance to present-day AI hype. This case study demonstrates how historical research on 1950s chest x-ray expertise and cancer ‘screensters’ in the 1970s may supplement ethnographic work on AI and image-based medicine today. Through self-reflection on the author’s role as a collaborating historian on an STS-oriented research project, this paper asks: How can historical analysis manifest itself within the dynamics of project-based, collaborative STS research into contemporary medical AI? What implicit normativities do historians, particularly those specializing in ‘histories of AI’ (Ali et al., 2023), bring to STS research in the domain of medical AI? How can historians contribute to the innovation of research methods for the examination of medical AI?