Accepted Contribution

Abstract ground truthing for concrete model making  
Niccolò Tempini (University of Exeter), Ravi Poorun, Marc Goodfellow (University of Exeter), Peter Challenor (University of Exeter)

Short abstract

Far from representational stand-ins, ground truths are functional, pragmatic artifacts of model machinery. Examining neural mass modelling for neonatal developmental risk management, we show ground truthing as multi-layered, relational, and shaped by structural contingencies in data and model alike.

Long abstract

In this paper, we discuss practices of ground truth construction in healthcare-applied mathematics research and, specifically, their role in the quantification of uncertainty. Our argument builds on the Digital Twins for Modelling Neurodevelopment project, in which mathematicians and clinical pediatricians are working to develop a neural mass model that identifies newborns at high risk of developmental delay, so that they can be routed through appropriate care pathways. A working model would infer this risk by interpreting key signatures in the EEG of a sleeping newborn.

The central challenge emerges from the model's non-linear structure: multiple parameter sets can produce identical model outputs, making the interpretation of clinically-observed EEG inherently uncertain. Optimization algorithms generate partial, algorithm-dependent parameter distributions. These are validated through a second sampling method, Latin Hypercube Sampling (LHS), which constructs a further ground truth, about the abstract parameter space, against which the former can be evaluated. This evaluation cannot be carried out exactly, so validation relies on expert judgment: the visual inspection of complex datasets against a probabilistic ground truth.
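To make the sampling step concrete, the following is a minimal sketch of Latin Hypercube Sampling over a model's parameter space, using SciPy's quasi-Monte Carlo module. The parameter names and bounds here are hypothetical illustrations, not the project's actual neural mass model parameters; the point is only how LHS stratifies each parameter's range so that a modest number of samples covers the space more evenly than plain random sampling.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical parameters and bounds, for illustration only.
param_names = ["excitatory_gain", "inhibitory_gain", "time_constant"]
lower = np.array([0.0, 0.0, 0.01])
upper = np.array([10.0, 10.0, 0.1])

# Latin Hypercube Sampling: each parameter's unit interval is divided
# into n strata, and each stratum contributes exactly one sample.
sampler = qmc.LatinHypercube(d=len(param_names), seed=0)
unit_samples = sampler.random(n=100)           # points in [0, 1)^d
samples = qmc.scale(unit_samples, lower, upper)

# Each row is one candidate parameter set to run through the model;
# the resulting model outputs form a reference distribution against
# which optimizer-derived parameter estimates can be compared.
print(samples.shape)  # (100, 3)
```

In practice, each sampled parameter set would be simulated through the forward model, and the spread of outputs compared with the optimizer's recovered distributions, which is where the expert visual inspection described above comes in.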

This paper shows how, in non-linear mathematical modelling, ground truth practices are multi-layered. A focus on the meta-level validation structure of LHS shows why even comprehensive, perfect data cannot yield perfect models. Ground truthing moves away from exact pattern recognition toward flexible, robust tuning that acknowledges the limitations of initial models. Ground truths are pragmatic and functional, yet abstract and probabilistic, artifacts. Far from being representational stand-ins for reality, they are relational, operationalized constructs embedded within specific clinical contexts. This understanding reshapes how we conceptualize data and modelling.

Combined Format Open Panel CB186
Ground truths and the epistemology of AI
Session 1