Accepted Paper

When LLM Outputs Become Evidence: Reconceptualising Pre-Digital Audit Trails for Post-AI Adjudication  
Nneoma Ogbonna (Northumbria University)

Paper short abstract

The reliance on Large Language Model (LLM) outputs from 'non-human informants' as evidence challenges pre-digital adjudicatory frameworks. To bridge this gap, I propose the Explainable Audit Trail (XAT), a post-AI reconceptualisation of the audit trail, built on empirical and interdisciplinary research.

Paper long abstract

Generative AI systems, such as Large Language Models (LLMs), are increasingly used in the justice system in England and Wales to process forensic audio and textual data, performing tasks such as transcription, translation, summarisation, and interpretation. Their outputs may serve as ‘expert evidence’ that judges and juries must evaluate, positioning LLMs as ‘non-human informants’. In this sense, adjudication resembles qualitative inquiry, relying on interpretation, triangulation, and assessments of adequacy, albeit within the safeguards of trial fairness.

LLMs introduce familiar challenges, such as inaccuracy and opacity, yet traditional mechanisms for testing reliability, such as cross-examination and summative reports, are poorly suited to them. This underscores the need to reconceptualise how AI outputs are assessed for reliability in criminal adjudication.

This research draws on audit trails, a long-standing tool for documenting how information is created and interpreted across disciplines, including digital forensics and qualitative research. With roots dating back to the fifteenth century, audit trails provide a pre-digital foundation for evaluating ‘non-human informants’.

I propose the Explainable Audit Trail (XAT): a reconceptualisation of the traditional audit trail designed to enhance the reliability assessment of AI systems. Grounded in empirical analysis of digital forensic practice and interdisciplinary scholarship across law, human–data interaction, explainable AI, and scientific communication, XAT provides process transparency across the evidential lifecycle. It documents how LLM outputs are generated and interpreted, enabling courts to assess reliability in a structured way. Through XAT, I demonstrate how pre-digital methodologies can support post-digital evaluation of LLM outputs.

Traditional Open Panel P201
The Futures of Qualitative Inquiries: Post-Digital Methods, Pre-Digital Methodologies