- Convenors:
  - Niccolò Tempini (University of Exeter)
  - Laura Savolainen
  - Florian Jaton (EPFL)
  - Benedetta Catanzariti (University of Edinburgh)
  - James WE Lowe (University of Exeter)
- Discussants:
  - Minna Ruckenstein (University of Helsinki)
  - Susanne Bauer (University of Oslo)
  - Geoffrey Bowker (University of California, Irvine)
- Format: Combined Format Open Panel
Short Abstract
The study of ground truth construction practices reveals the contingent, negotiated processes underlying AI's epistemological claims. This track pursues theory-building to understand how provisional data functions as foundational 'truth' and what other approaches might yield more resilient futures.
Description
Ground truth construction (the production of datasets for training and evaluating machine learning systems) constitutes a critical site for examining the epistemological foundations underpinning contemporary AI development. While ground truths ostensibly provide objective benchmarks for model validation, empirical investigations reveal negotiated, contingent, and context-dependent processes that challenge straightforward assumptions about data, evidence, and measurement in automated systems.
This track interrogates ground truth construction as an analytical aperture into broader questions concerning knowledge production and AI development practices. An STS lens enables empirical examination of: how radical uncertainty is stabilised in order to enable the production of epistemic claims; what practices and socio-technical arrangements enable provisional data to function as foundational truth; how commercial imperatives, organisational logics, and platform architectures constrain what constitutes valid or sufficient evidence; and what alternative methodological frameworks might better accommodate the inherently speculative character of AI knowledge systems.
Ground-truthing practices exemplify tensions central to the conference theme of more-than-now and resilient futures. Current AI development practices privilege pragmatic expediency over robust epistemological foundations, and what works today rarely carries over to the changed contexts of tomorrow. AI systems remain notoriously brittle precisely because their reliance on a ground truth embodies compromises between contingency and robustness. Examining these practices through an STS lens enables critical inquiry into how things could be otherwise: what alternative futures are being foreclosed?
We particularly welcome contributions pursuing theory-building and methodological innovation rather than purely diagnostic critique. Papers might develop novel conceptual vocabularies for understanding epistemic practices under uncertainty; propose alternative validation frameworks oriented toward resilience; offer comparative analyses across domains; or trace genealogies illuminating how current practices achieved normalization.
The organising team brings diverse empirical grounding across multiple AI research and development domains. The combined format integrates traditional paper presentations with a discussant roundtable session, facilitating collective theorisation of the speculative infrastructures that constitute AI knowledge production.