Accepted Contribution
Short abstract
Drawing on Eco's interpretive semiotics, this paper conceptualizes ground truth construction as supervised semiotic closure. Through the notions of the Model Reader, overcoding/undercoding, and aberrant decoding, it examines how annotation compresses meaning, with epistemological consequences for AI outputs.
Long abstract
Ground truth construction, the production of labelled datasets for training and evaluating ML models, is a functional prerequisite for AI development. It is also an epistemological operation: it transforms open, interpretable cultural artifacts into fixed categorical assignments that serve as foundational knowledge claims. Drawing on Umberto Eco's distinction between dictionary and encyclopedic models of meaning, I conceptualize ground truth as a form of supervised semiotic closure: the suspension of interpretive openness to produce verifiable data. Where meaning is understood as an unlimited network of culturally situated associations (the encyclopedia), annotation practices impose a dictionary logic (bounded, stable definitions) onto inherently polysemous material. In LLMs, the cost of this conversion is rendered invisible by technocultural apparatuses. This paper argues that Eco's interpretive semiotics provides a conceptual vocabulary for analyzing this transformation and its epistemic costs. Three concepts prove particularly relevant: (1) the Model Reader; (2) overcoding/undercoding; (3) aberrant decoding. The paper develops this framework through a case study of annotation practices for art and archaeology photographic records, arguing that the brittleness of contemporary AI systems is a predictable consequence of semiotic closure: artificial systems trained and validated on limited human interpretation and knowledge tend to fail when they encounter contexts where the interpretive chain needs to continue further. By applying Eco's semiotics to the analysis of ground truth, this paper contributes a novel vocabulary for understanding epistemic practices under uncertainty in AI research.
Ground truths and the epistemology of AI
Session 2