Accepted Paper
Paper short abstract
When AI outputs harden into “ground truth,” trust is instituted rather than felt. Drawing on the material turn, I examine how datasets, standards, interfaces, and audit traces stabilize epistemic warrant. I argue that “trust enough” requires structured contestability, provenance, and repair.
Paper long abstract
In a post-truth milieu, truth has not vanished; rather, the justificatory basis of public judgement shifts from reasons to scalable evidential devices. AI outputs function as “ground truth” not because they are truer but because they are embedded in workflows as endpoints of inference (Bowker and Star 1999; Pasquale 2015). Hence “why trust AI?” is a normative question: when is dependence on AI epistemically justified? Reliance is instrumentally rational dependence; trust is its normative authorization, licensing an agent to shift epistemic risks to the system and its institutional carriers (O’Neill 2002; Lee and See 2004). Accordingly, “trust enough” is a threshold concept: under uncertainty, what makes the move from reliance to authorized reliance rational?
I offer a conceptual analysis and a necessary-conditions argument. Where AI plays a ground-truth role, accuracy may justify reliance but cannot by itself warrant trust, because it neither secures examinability of evidential bases nor guarantees rebuttal and correction when failures occur (Burrell 2016; Pasquale 2015). Once inductive risk is acknowledged, as in high-stakes settings with asymmetric error costs, evidential thresholds become value-sensitive, so warranted trust must specify conditions for contestation and repair (Douglas 2000). I propose three necessary conditions: answerability (auditable provenance of data, labeling, scope, and failure modes), defeasibility (practical routes for challenge), and corrigibility (duties and mechanisms to revise data, models, and procedures under counterevidence). Only where these are secured does trust in AI attain epistemic warrant; otherwise “trust enough” collapses into efficiency-driven reliance or an authority effect that forecloses dispute.
AImagineries of the social: The adoptions of GenAI in making knowledge on social realities
Session 2