Accepted Contribution
Short abstract
Through ethnography and analysis of variational autoencoders, this paper explores how epistemic claims are produced in unsupervised learning systems that do not require ground truth. It shows how uncertainty and validation are enacted across algorithmic pipelines, rather than eliminated.
Long abstract
“Ground truth is a thing of the past”: a phrase I frequently encountered in a Japanese robotics laboratory. This statement justified a move away from supervised learning, towards approaches that do not rely on labelled data. According to my interlocutors, abandoning bias-prone processes of human annotation allows systems to become generative beyond, and more-than, human benchmarks of objectivity and validation.
This paper starts from these intuitions to ask how epistemic claims are stabilised when ground truth is no longer explicitly constructed, and whether unsupervised learning truly escapes the epistemic contingencies of ‘ground-truthing’ or instead redistributes them into less visible, harder-to-interrogate mathematical and computational forms. It examines how uncertainty management and truth production are displaced across technical pipelines rather than eliminated by algorithms framed as “learning by themselves”.
My argument develops along two lines. First, through an analysis of the pipeline of an unsupervised architecture (the variational autoencoder), I show how particular modes of organising truth and knowledge become embedded in mathematical logics that reconfigure the relation between generativity and robustness. I argue that unsupervised learning does not eliminate contingent processes of truth-making; rather, it redistributes epistemic labour into formal procedures often opaque to social-scientific critique.
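The claim that epistemic constraints become embedded in the VAE's mathematical logic can be made concrete through the architecture's training objective, which balances a reconstruction term against a Kullback–Leibler term penalising any departure of the learned latent distribution from a fixed prior. The following is a minimal illustrative sketch, not drawn from the paper; the function and variable names are my own assumptions.

```python
import math

def elbo_terms(x, x_recon, mu, log_var):
    """Per-example terms of the (negative) VAE objective, illustrative only.

    recon: squared-error reconstruction loss between input and decoder output.
    kl: closed-form KL divergence between the approximate posterior
        q(z|x) = N(mu, diag(exp(log_var))) and the prior p(z) = N(0, I).
    """
    recon = sum((a - b) ** 2 for a, b in zip(x, x_recon)) / len(x)
    kl = -0.5 * sum(1 + lv - m ** 2 - math.exp(lv)
                    for m, lv in zip(mu, log_var))
    return recon, kl

# When the posterior exactly matches the prior (mu = 0, unit variance),
# the KL penalty is zero; any deviation from the prior is "taxed".
recon, kl = elbo_terms([0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0])
```

The KL term is one place where the redistribution of epistemic labour the paper describes is legible: no labelled ground truth appears, yet the prior acts as a built-in standard against which every latent representation is judged.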
Second, the paper proposes a methodological reorientation. By tracing how epistemic claims are enacted through mathematical-statistical structures themselves, it demonstrates the value of engaging with how algorithms function, not merely what they do, so as to interrogate modes of epistemic classification and constraint that may seem alien to us, until they are not.
Ground truths and the epistemology of AI
Session 1