- Convenor: Ricky Janssen (Maastricht University)
- Chair: Nora Engel (Maastricht University)
- Discussant: Pierre-Marie DAVID (Université de Montréal)
- Format:
- Traditional Open Panel
- Location:
- HG-07A16
- Sessions:
- Friday 19 July, -
Time zone: Europe/Amsterdam
Short Abstract:
This panel will attend to questions of normativity, valuation and method around digital transformations of diagnosis. We welcome contributions from papers on AI and other forms of digital technology developed to diagnose disease or support diagnostic moments.
Long Abstract:
In the context of medical testing, where time and resource constraints are constant challenges, research institutions and industry actors aim to create diagnostic technologies that can act quickly, simply, or even autonomously to provide test results for various health conditions. Through the introduction of new digital diagnostic technologies, such as self-testing apps and AI-based computer-aided detection systems, diagnostic practices transform. Through this transformation, diagnosis may no longer require the seeing, listening, touching, or clinical judgement of a healthcare professional, but rather relies on software algorithms attuned to specific settings for accuracy. At the same time, the diagnostic technology itself changes through software updates and recalibration for specific settings. These transformations require resources, which may be scarce or unavailable.
This panel will attend, among other things, to questions of normativity, valuation, and method around digital transformations of diagnosis. We welcome contributions from papers on AI and other forms of digital technology developed to diagnose disease or support diagnostic moments. Mobilizing STS sensibilities to understand how these technologies are shaped by, and shape, the social, material, and political worlds in which they work, we invite papers related (but not limited) to questions such as:
What should digital diagnostic care and practice entail?
What socio-material practices are involved in digital diagnoses?
What do different actors value when designing, implementing and using digital technology for diagnosis?
How is diagnostic confidence constructed and how might this be altered in the context of digital diagnostic technology?
What are these digital technologies made to diagnose?
What values are being generated through these diagnostic transformations for health care and practice?
How should digital diagnostic technology be designed, implemented and used to support equity in access and care?
How do we understand, study and “test” new testing/diagnostic technologies when the practices around them and technologies themselves are changing?
Accepted papers:
Session 1 Friday 19 July, 2024, -
Paper short abstract:
This presentation traces the ways through which the promises of 'digital diagnosis' have led to specific forms of digitisation in healthcare practice: forms that have left healthcare systems without the interconnected and easy-to-access digital records promised at the turn of the millennium.
Paper long abstract:
Digital diagnosis promises various economic, quality, and safety benefits for those engaged in the management and organisation of healthcare. To policy makers it offers resolution to workforce problems through the deskilling of clinical work. When allied with emerging and seductive promises of preventative healthcare, digital diagnosis opens a door to new screening mechanisms that further buttress claims of a future healthcare system radically more economical than those currently available. To managers concerned about the all-too-human workforce, who are liable to deliver negligent care or malicious harm to patients, it offers a system of control over clinical work. The augmentation and fixing of decision making into pre-defined pathways of best practice and evidence-based medicine is thus a heady cocktail for those challenged to find more-and-better mechanisms of oversight into the increasingly specialised fields of clinical work.
To realise these dreams, an environment must be created in which clinical work can be rendered legible to the numerical logics of computation. Such an environment is incompatible with a medical record settlement that stretches back 100 years and balances the expertise of the manager, the doctor, and the researcher in ways that have, for those 100 years, determined the epistemic operations of medical work. In short, I explore the ways that the promises of digital diagnosis have already unsettled, and continue to unsettle, the truth games of clinical work.
Paper short abstract:
I will present the perspectives and concerns of cardiologists, data scientists, and a platform developer regarding predictive algorithms and digital twin technologies. I will show how their efforts to develop these technologies, and their underlying values, shape, for example, patients' roles and responsibilities.
Paper long abstract:
My PhD research is embedded in the MyDigiTwin consortium, a research initiative aiming to create a digital twin platform where citizens, including patients, can compare their health data to big data references. The ambition of the technology promotors is to improve early detection and prevention, in this case of cardiovascular disease. As these digital twins are still in the R&D phase, I aim to discover how potential new roles and responsibilities of citizens and physicians, including notions of 'good care', are inscribed in technology development.
Through interviews with cardiologists, data scientists, and platform developers, I explored how they envision the added value of these technologies and how they should be implemented in healthcare. I use the notion of 'scripts' to analyze how the envisioned functionalities of the technology enable and constrain patients' actions.
I found that stakeholders had divergent ideas on the role of digital twins. Many characterized the technology primarily as a tool to be adopted or disseminated by cardiologists or other medical professionals; they therefore aim to improve the interpretability and validity of the algorithms to raise clinicians' trust. Here, the responsibility for care and for disseminating information to patients remains primarily with physicians. Another perspective contests this and advocates for digital twins as a tool for patients, focusing on making the scientific information and the algorithms' predictions accessible and understandable. Subsequently, responsibility for health is shifted towards the patient. I will discuss how power asymmetries within the consortium shape the design of digital twins.
Paper short abstract:
How can (and should) researchers investigate whether a digital health intervention actually works? In this paper, we explore how a digital health approach for self-testing is evaluated and consider how current evaluation indicators make (in)visible the way an app is made to work in practice.
Paper long abstract:
In the case of digital health interventions, the "gold standard" for evaluating effectiveness is the randomized controlled trial (RCT). Yet RCT methodology presents issues, such as precluding changes to the technology during the study period and the use of study settings that do not reflect "real world" contexts. So, how can (and should) researchers investigate whether a digital health intervention works in practice? In this paper, we use insights gathered throughout our ethnographic research on a quasi-randomized trial implementing an app called HIVSmart!, a digital strategy designed to support people in the process of HIV self-testing. We explore how digital health approaches, such as apps, are evaluated and how current evaluation indicators make (in)visible the daily realities of making an app work in practice. Our analysis reveals that digital health researchers and policy-makers who guide digital health evaluation need to reconceptualize the way we ask questions about digital health effectiveness. For example, instead of asking whether a particular digital health tool for diagnosis/screening is "easy to use", we should take a much closer look at what makes the process of using the technology easy over time and at the multiple, shifting users who define this ease of use. Based on our findings, we contribute to critical STS theory on the evaluation of global health interventions by putting forward alternative ways of asking questions about the effectiveness of digital health interventions.