- Convenors:
  - Paul Trauttmansdorff (Technical University of Munich)
  - Sofie Kronberger (University of Vienna)
- Format:
- Traditional Open Panel
Short Abstract
This panel examines the convergence of biomedicine and surveillance through existing and newly emerging biomedical technologies, particularly biometrics. We explore the sociotechnical impacts of these technologies on medical values and norms, infrastructures, and experiences of health and well-being.
Description
Driven by continued datafication and automation, biometric data have become central to advancements in personalized medicine and healthcare. Yet biometrics embody and expand the epistemologies and techniques of surveillance and security regimes. This panel explores the renewed nexus between medicine and surveillance, reconfigured by advances in biometrics and AI. Biometric surveillance has long been integral to both administrative and clinical procedures, from immigration health screenings to regulating access to care. Emerging biometric technologies, such as skin screening, facial recognition, emotion detection, or voice analysis, are blurring the boundaries between diagnostics, identification, prediction, and surveillance. They are reshaping the norms and values through which bodies are diagnosed and made in/eligible, and how populations are governed and controlled.
Although biometric medicine has far-reaching consequences, it remains relatively little discussed. It promises to account for multiple understandings of health, potentially reconfiguring traditional domains of authority and power. Its techniques transform the frameworks through which health risks are produced and monitored, treatment eligibility is determined, or medical interventions are conceived. New medical capital and data assets are being created for institutions and commercial actors. At the same time, deploying biometric techniques often deepens epistemic, economic, and historical injustices and structures of discrimination and oppression.
We propose renewing a critical focus on the entanglements of medicine and surveillance. We welcome contributions with methodological and analytical perspectives on biometrics/biodata in medicine and health, and how these domains are reconfigured through datafication and algorithmic processing.
Contributions may address a range of themes, including:
- sociotechnical visions and promises underpinning medical biometrics;
- historical and colonial legacies shaping biometric assemblages and the construction of (non-)humanity and able-bodiedness;
- how individuals are enrolled in, adapt to, or resist emerging forms of health surveillance;
- issues around privacy, autonomy, and consent in medical biometrics;
- reinforcing or challenging inequalities, discrimination, or bias, and the shifting responsibilities and accountabilities.
Accepted papers
Session 1
Paper long abstract
The arrival of big genome data has stimulated a fascination with genomic predictions. DNA phenotyping (the prediction of observable traits from the genome), Genome-Wide Association Studies, and Polygenic (Risk) Scores define research agendas in biology and medicine, but also foster links to psychology and sociology. The emerging field of behavioural genetics and its uptake in criminology, I argue, is one such link that adds new qualities to sociological analyses in the tradition of Nikolas Rose’s ‘somatic society’ or Ian Hacking’s ‘human kinds’. Extending these and critical studies on phenotyping, I offer an analysis of what I term ‘dis/ordering bodies’.
Biosocial criminology has long argued that there is a bodily substance to crime, an argument shaped by the discipline’s relationship to pathology. Big genome data offers a means to think that relationship anew. Situated at the intersection of psychology, medicine, and criminology, the ‘antisocial phenotype’ emerges as an analytic focus. It connects genetic analyses of mental disorders to issues of crime and disorderly behaviour. Based on a systematic study of over 700 research papers and expert interviews, I trace how this relationship is crafted and how the relation between order and disorder is articulated in several ways. The aesthetic rationalism of big genome data methods is one such articulation of order: computation establishes analytic distance from messy bodies and their messy behaviour. At the same time, it assumes the scientific authority to re-inscribe orderliness and disorderliness into bodies at the molecular level.
Paper short abstract
This paper explores emerging facial recognition applications in biomedicine to trace the making of contemporary imaginaries of medical faces, together with their associated hopes and promises for producing new diagnostic knowledge.
Paper long abstract
While facial recognition technologies have increasingly been framed as revolutionary tools for unlocking new forms of diagnostic knowledge, their assumptions and promises are rooted in historical contexts. Drawing on Fleck’s notion of pre-scientific ideas, the paper traces how longstanding myths and beliefs around the face become newly embedded in medicine and healthcare contexts through surveillance and machine learning applications. Empirically, the analysis focuses on recent examples of machine-learning algorithms that are developed, trained, and tested to identify facial traits and automatically match patient faces to cases of genetic disorder. The paper traces how these pre-scientific concepts and their associated medical claims have promoted the face as a privileged source for reading and classifying the physical and mental conditions of an individual subject. It further explores how these ideas do not remain static when integrated into contemporary scientific frameworks. Instead, the study shows how algorithmic processes, through the automated coding and analysis of medical faces, produce new meanings and practices within diagnostic and healthcare settings. The paper thereby argues that novel imaginaries of the medical face are designed to promote emerging biomedical practices around facial recognition, narrating facial characteristics as seemingly objective, biological markers of disease. By analyzing medical faces, I furthermore illustrate the blurring of boundaries between diagnostics, identification, prediction, and surveillance medicine. These shifts reshape the norms and values through which bodies are diagnosed and made legible, health risks are produced and monitored, treatment eligibility is determined, and medical interventions are conceived.
Paper short abstract
Facial AI in biomedicine and healthcare raises ethical and social questions that are framed differently across disciplines. This paper traces controversies over harm, authority and facial proxies, and how ethical repertoires define what counts as ‘good’ care.
Paper long abstract
As AI-based facial technologies are explored for diagnosis, monitoring, triage and identification across biomedicine and healthcare, faces are enacted as a biometric–clinical interface through which clinical judgement, data infrastructures and surveillance logics co-produce one another. Drawing on a mapped corpus of academic publications (2016–2025) addressing the social and ethical implications of facial AI in these settings, this paper reads the literature as a sociotechnical arena of problematisation in which harms, subjects, responsibilities and anticipatory futures are performed.
We show that “ethics” is not a single register but a set of heterogeneous enactments that privilege particular forms of evidence and specify how concerns should be operationalised and audited. Across much of the corpus, ethics is translated into compliance-ready framings (e.g., safety/clinical risk, privacy/data governance, autonomy/consent), while justice-oriented concerns (distributional, epistemic and structural), stigma and appearance norms, dehumanisation, and patient/public perspectives persist as thinner attachments. We conceptualise these patterned differences as ethical repertoires: competing regimes of worth for what counts as “good” care.
Tracing where repertoires intersect and collide, we identify three recurring controversies: (1) what counts as harm (measurable error and risk versus social meaning and lived experience); (2) who is authorised to perform ethical labour in decisions to deploy, pause or withdraw; and (3) facial reductionism, whereby features or expressions are stabilised as proxies for inner states, risk or worth. We argue that these controversies turn ethics into governance devices: boundary objects that delimit what becomes measurable and actionable within evidentiary infrastructures as facial AI is made implementable.
Paper short abstract
This paper engages with the biopolitics of disability to examine how health is articulated and communicated as an ever-expanding field of intervention through museum narratives and fictional scenarios in 'Museums of the Future'.
Paper long abstract
This paper presents a comparative ethnography of two ‘Museums of the Future’ in Germany that exhibit speculative scenarios of the future of health through gamified interactives, sensor-wearables, and immersive activities. The museums project a world facing crises, exhibiting environmental and health risks and their corresponding biomedical ‘possibilities’. The museums’ interventions focus on the behaviours and bodies of the public, who face repeated injunctions to surveil, fix, and augment their bodies through increasingly invasive genomic and predictive medicines. I engage with the biopolitics of disability to examine how health is articulated and communicated as an ever-expanding field of intervention through museum narratives and fictional scenarios. I examine how the sociotechnical visions and promises underpinning these datafied technologies are made legible through first-person, gamified museum exhibits: that is, how abstract ideas of technology and society are materialised through objects and experiences, and how people make sense of and contest those ideas through interaction with artefacts. This is read in light of the institutionally specific form of the museum, its relationship to state and society, and the production of collective beliefs and individual subjectivities.
Keywords: datafication, surveillance, debility, cultural studies
Paper short abstract
This paper shows how the procedures, logics and values of biometric surveillance are now expressed in everyday discourses and habits of digital well-being, and critically explores the political implications of this arrangement.
Paper long abstract
Against the well-publicised psychological risks of technological over-use, practices of datafied self-measurement are frequently presented as the key to the digital good life. In this work-in-progress paper, the habits these practices encompass (e.g. measuring screen time, quantifying social interactions on platforms, and periods of digital disconnection) will all be presented as instances of self-surveillance that extend medicalised procedures of biometric surveillance into the everyday. Specifically, engaging with ongoing findings from the UKRI AHRC-funded project ‘Control Shift Escape: New Possibilities for Digital Well-being’, we will draw attention to what Seb Franklin (2015) calls the logics of control made visible through discourses and designs of the encoded body. We will argue that attempts to measure and condition human life through regimes of biometric data surveillance render its complexities “legible through processes of capture, digitization, modelling, and prediction” (Franklin, 2015: 43), in a way characteristic of the cybernetic ‘closed’ socio-technological systems examined by Paul Edwards (1996). Our original contribution will be to show how these logics have now extended to everyday domains of technological self-control, whereby the individual is enjoined to manage their digital well-being through disciplined engagements with technology. The ideas and practices of the encoded body, we will suggest, now function as part of the contemporary valences and habits of a digital life well lived. By establishing who is responsible for its fruition, moreover, such a nexus carries significant normative and political implications that we will explore fully in the final paper.
Paper short abstract
This presentation examines ethical and societal issues, focusing on the uncertainties of epigenetic testing for predicting suicide, depression, and high-stress environments among children and youth in a Japanese case. We discuss the risks of stigmatization, responsibility, and governance.
Paper long abstract
Recent advances in epigenetic research have raised expectations that biological markers may help identify individuals exposed to chronic stress or at risk for mental health problems. In particular, epigenetic testing has been discussed as a potential tool for detecting the risk of suicide, depression, or high-stress environments among children and adolescents. While such approaches may contribute to early intervention and preventive support, they also raise significant ethical and societal concerns.
This presentation examines the ethical and societal issues associated with the emerging use of epigenetic testing for mental health risk assessment in children and youth. Particular attention is paid to the uncertainties surrounding the scientific interpretation and predictive validity of epigenetic markers. Because epigenetic changes are influenced by complex interactions among biological, environmental, and social factors, the translation of such data into individual-level risk prediction remains highly uncertain.
The presentation further discusses the potential risks of stigmatization and labeling when biological indicators are used to identify vulnerable children. The use of epigenetic information may inadvertently reinforce deterministic interpretations of mental health risk or lead to discrimination in educational, welfare, or family contexts. In addition, questions arise regarding responsibility and governance: who should manage and interpret such sensitive data; how consent should be obtained in the case of minors; and how the benefits and risks of testing should be balanced.
By examining these ethical and societal challenges, this study aims to contribute to broader discussions on the responsible governance of emerging epigenetic technologies in mental health and child welfare contexts.
Paper short abstract
Drawing on Stuart Hall, this paper examines the integration of biometrics in the British NHS as a shift from clinical bureaucracy to 'algocracy', excluding marginalised communities from healthcare and weaponising patient medical 'illegibility' and clinical 'ineligibility' for immigration control.
Paper long abstract
The integration of biometric technologies and centralized data infrastructures into the UK’s National Health Service (NHS) is accelerating a dangerous shift from clinical bureaucracy to medical "algocracy." Drawing on Stuart Hall’s method of conjunctural analysis, this paper argues that the deployment of medical surveillance cannot be understood as an isolated technological progression; rather, it represents a specific historical conjuncture where surveillance capitalism, the privatization of public health, and the authoritarian populism of the "Hostile Environment" violently intersect. Examining the integration of Palantir’s Federated Data Platform (FDP) alongside case studies of automated triage and biometric sensors, this study demonstrates how healthcare spaces are articulated as sites of automated border enforcement. Marginalized, racialized, and undocumented communities are systematically excluded through dual mechanisms: clinical illegibility, where biased biometric hardware and NLP algorithms fail to recognize non-normative bodies; and care ineligibility, where automated data-sharing pipelines weaponize patient information for immigration enforcement. By synthesizing recent epidemiological data and grassroots evidence from Medact’s "Patients Not Passports" campaign, this paper illustrates how the "algocratic conjuncture" removes human accountability from medical access. Ultimately, this research argues that treating patient data as a punitive asset destroys the trust-based infrastructure of the NHS, utilizing algorithmic fear to deter vulnerable populations and actively degrading national epidemiological security.
Paper short abstract
This paper traces the genealogy of psychopathological voice analysis from Eberhard Zwirner’s Frequenzschreiber (1928) and Paul Moses’ Voice of Neurosis (1954) to contemporary AI vocal biomarker systems, asking which historical constructions of deviant vocality are encoded in today’s algorithms.
Paper long abstract
Contemporary voice AI systems that claim to detect depression, bipolarity, or neurosis from speech acoustics (Semel 2022; Low et al. 2020; Fagherazzi et al. 2021; Turow 2021) are routinely critiqued for encoding cultural, gendered, and ableist norms as universal biomarkers (Ma, Patitsas and Sterne 2023). What such critiques rarely pursue, however, is the historical depth of the epistemological formations they contest — the genealogy of techniques, measurement apparatuses, and normative assumptions through which voice first became a readable biometric index of psychic deviance.
This paper traces two foundational moments in that genealogy. First, Eberhard Zwirner's phonometric apparatus at the Kaiser-Wilhelm-Institut für Hirnforschung (Berlin-Buch, 1928–c. 1940), where the Frequenzschreiber translated the fleeting voice into quantifiable curves, operationalizing claims that speech rhythm and pitch encode psychiatric states — and, in the National Socialist context, racial-hygienic typologies. Second, Paul J. Moses' The Voice of Neurosis (1954), in which acousmatic analysis of recorded samples enabled diagnosis without clinical encounter, constituting voice as an autonomous biometric index of neurotic constitution and "androgynous" deviance.
Together, these cases illuminate how specific measurement technologies, institutional contexts, and normative assumptions about deviant embodiment were stabilized into diagnostic categories that persist — rearticulated in digital feature sets such as GeMAPS (Eyben et al. 2016) — in current clinical and commercial voice AI. Situating these histories within the panel's concern for historical legacies shaping biometric assemblages, the paper asks: which constructions of pathological vocality are encoded in today's algorithmic infrastructure, and with what political effects?
Paper short abstract
This paper examines remote biometric monitoring in functional neurological disorder (FND) research. It argues that wearable sensors and digital self-reports transform patients’ everyday lives into continuous data streams, extending neurological observation and experimentation beyond the laboratory.
Paper long abstract
Functional neurological disorder (FND) is the current medical designation for heterogeneous symptoms, including seizures, paralyses, and sensory disturbances, historically labelled hysteria and often assumed to have disappeared. Now recognised as a common yet still vaguely understood condition, FND has become the focus of intensifying medical research since the mid-1990s. Much of the research into this still-contested disorder relies on laboratory-based neuroimaging studies seeking to identify symptoms’ underlying neural mechanisms, which remain elusive.
Recently, a new FND research strand has begun to supplement laboratory experiments with remote monitoring technologies to track and quantify patients’ symptoms in everyday life. Using wearable sensors and commercial devices such as Fitbit, these studies aim to identify FND’s psychobiological correlates in real-world contexts. Such studies continuously record patients’ physiological parameters (heart rate, physical activity, sleep patterns) and repeatedly assess their affective states through digital self-reports. Participants’ adherence to monitoring protocols is also tracked and evaluated.
Based on a close reading of a recent study, this paper examines how such studies extend neurological experimentation beyond the laboratory into patients’ homes and daily routines. I argue that these protocols configure patients, wearable sensors, commercial data infrastructures, and self-report interfaces into biometric research infrastructures that transform everyday life into continuous streams of medical data. Central to these studies is a problematic imaginary that sufficiently dense streams of heterogeneous data will make FND legible. By translating sensations, emotions, and daily events into biometric measures, these protocols expand neurological observation and medical surveillance into the domain of everyday life.
Paper short abstract
This paper examines how military biometrics reformulates medical capitals for warfare logics, shifting biometric questions from identification and health toward bodily eligibility for operations.
Paper long abstract
In a recent monograph, Andrew Bickford notes: "Control the body and you control the future" (2021: 19). On his account, the US military promotes sustained investment in biotechnology and military medicine aimed at producing stronger, more lethal, and better-protected soldiers for the global battlefields of the twenty-first century. Among these initiatives, biometric projects hold a central role. Whereas in the medical field per se the main aim of biometrics is to improve bodily functions and maintain health, in military medicine such technologies may blur boundaries and open the way to other perspectives. Although, as Shoshana Magnet noted some time ago, "biometric scientists imagine these technologies as more highly evolved, efficient, and accurate versions of older techniques of identification" (2011: 19), what is at stake between the medical and the military invites us to consider that whoever identifies the body may engineer what is yet to come. How medically intimate we are may therefore serve specific purposes for warfare operations. The aim of this presentation is to show how warfare logics reformulate medical capitals for the purpose of being mission-ready (Thormann, Bize & Hajak 2024), moving beyond such questions as "Who are you?" and "How are you?" that commonly relate to biometrics. It questions how biometrics shapes what kinds of bodies are in/eligible for operations while drawing on healthcare terrains.
Paper short abstract
This paper examines selected 19th-century practices such as phrenology, craniometry, and electrical study of emotion to interrogate assumptions about surface and depth, fixity and mutability, in 21st-century biometric technologies using facial recognition.
Paper long abstract
The human head—as living head, desiccated skull, or expressive face—was a privileged bodily site for past medical and anthropological sciences as they attempted to categorise and predict individuals’ heredity, criminal inclinations, or medico-nervous susceptibility. Phrenology, craniometry, physiognomy, and the electrical study of emotions are some notable examples from the nineteenth century. Many of their ambitions persist in today’s emerging biometric technologies, as does the primacy of (a part of) the human head, as visually captured and analysed through automated processes of facial recognition. But when a ‘face’ is photographed and evaluated using AI, is it simply the visible surface of the face that is taken to encode relevant bodily markers? What might that mean for attaining precision or stability in measurement and classification? This paper teases apart some of the ways in which issues of surface and depth, fixity and mutability were configured in historical practices of medicine or surveillance, notably 19th-century anthropology, physiognomy, and physiology. For instance, where the ‘criminal man’ or electrically and hypnotically induced emotions were intended to be perceptible at a glance, from the surface of the face, physical anthropologists like Paul Broca insisted that only skulls provided sufficient stability, and only painstaking measurement protocols sufficient precision, to identify hereditary anomalies. Ultimately, the paper suggests that differences between these traditions remain important for understanding the assumptions, values, and norms inscribed into 21st-century facial biometrics.
Paper short abstract
This paper examines Canada’s failed use of DTC genetic genealogy testing to facilitate the deportation of detained migrants, arguing that privacy-focused critiques obscure the scientific limits of genetic identity claims, thus enabling the expansion of other forms of biometric surveillance.
Paper long abstract
In 2023, the Privacy Commissioner ruled that the Canadian Border Services Agency (CBSA) had violated the Privacy Act by submitting DNA samples from detained migrants to Direct-to-Consumer (DTC) genetic platforms to establish identity and facilitate deportation. Although the initiative was deemed unlawful, regulators later framed the case as a governance “lesson” on how biometric data might be processed more compliantly in the future.
This paper examines the failed use of DTC genetic testing as a biometric identification technology within migration governance. It argues that failures do not necessarily undermine contested identification technologies, but rather, can legitimize future policy interventions. Drawing on the concept of strategic ignorance (McGoey 2015), I analyze how forms of ignorance are leveraged in the production and stabilization of digital identity regimes.
DTC genetic genealogy platforms transform biological samples into classifications of ancestry and race, promising ‘objective’, data-driven insights about identity and belonging. Within migration governance, this evidence has been mobilized as a substitute for documentation or testimony, extending forms of biometric surveillance embedded in immigration processes, despite the fact that geneticists have repeatedly challenged the validity of inferring nationality or identity from genetic data.
Drawing on strategic ignorance, I argue that the failed use of genetic genealogy testing by the CBSA exposes the political and epistemic fragility of ‘authoritative’ and ‘scientific’ classifications of identity central to migration management. Further, critiques of DTC genetic testing serve to legitimize its use by centering issues of privacy, thus obscuring the fundamental inadequacy of genetic genealogy as a proxy for identity.
Paper short abstract
In this paper, we map spatio-temporal care arrangements where AI systems use voice biomarkers to detect health risks. We show how always-on machine-listening infrastructures reshape triage and eligibility via digital front doors and intensify bias and accountability tensions.
Paper long abstract
The future of health and social care is increasingly framed as a sociopolitical and economic urgency driven by demographic ageing, multimorbidity, and workforce shortages. In response, an emerging set of communicative AI (ComAI) innovations operationalises voice as a biomarker, promising predictive, pre-emptive, and preventive health through "always-on" machine listening.
We examine and map ComAI systems that translate voice into actionable risk signals across care sites including health insurers, telehealth services, clinics, and care facilities. These systems blur boundaries between diagnosis, prediction, and surveillance by classifying vulnerability, assessing (in)eligibility, and organising triage and routing through "digital front doors".
Empirically, we draw on (1) a thematic analysis of website materials from 80 ComAI providers and (2) 20 semi-structured interviews with innovation managers in healthcare and social care. We map emerging spatio-temporal care arrangements and analyse the sociotechnical visions and promises underpinning voice-biomarker infrastructures. While our focus is primarily institutional, we attend to how these arrangements position patients as subjects of continuous listening with limited recourse against the classifications made about them.
Our findings show that promissory futures of "better care" through such systems depend on continuous monitoring while shifting responsibilities and accountabilities across clinicians, organisations, and vendors. We further highlight infrastructural frictions, invisibilised labour, and concerns about consent, privacy, and bias, particularly where voice analytics misread or exclude certain bodies and voices (e.g., accent, age-related change, disability).
Keywords: voice biomarkers; health surveillance; communicative AI; sociotechnical visions; anticipatory infrastructures