Accepted Paper:
Paper short abstract:
The paper presents an LLM-based tool for assessing Open Source Investigations (OSI), developed in collaboration with OSI practitioners. The project sparks critical discussion on the role of LLMs in evaluating information and establishing trust, and is informed by STS scholarship on truth production.
Paper long abstract:
Open Source Investigations (OSI) leverage the abundance of digital data for investigative and conflict reporting, which is especially crucial in the "post-truth" era, in which experts and non-experts alike produce all sorts of narratives about "what actually happened". OSI engage with this complexity and reclaim the internet for evidence production by emphasizing information sources, providing transparent methodologies, and inviting the participation of digital publics.
Given the known proficiency of Large Language Models (LLMs) in classifying and analyzing vast datasets, our work explores the intersection of LLMs with OSI. Together with OSI investigators, we developed an AI tool that leverages LLMs to assess OSI. The project first involved tracing different practices, strategies, and communication styles, which resulted in a database for assessing claims based on OSI. This database was subsequently built into a GPT-style tool.
In the paper, drawing on the STS insight that facts are not merely discovered but constructed through associative processes, we reflect on how OSI communities evaluate matters such as broken links and the curation of images, and we discuss to what extent we may rely on machine learning for such assessments. Beyond its practical use, the project serves as an occasion to discuss the role of LLMs in the context of information and trust against the backdrop of STS scholarship on truth production.
STS, AI Experiments, and the social good
Session 1, Thursday 18 July 2024