Accepted Contribution
Short abstract
Media organizations face many ethical challenges when adopting AI. We present D4M, a workshop-based framework that fosters critical data and AI literacy through group reflection on technologies and their context of use. We discuss findings from three workshops centred on real-world data and AI projects.
Long abstract
Encouraged by powerful narratives to get on the AI hype train or risk becoming obsolete, many media organizations are experimenting with AI. At the same time, AI use in media organizations raises complex ethical issues, including bias and discrimination, job displacement, copyright infringement, and the risk of eroding public trust in journalism. Given these developments, media organizations are looking for guidance on using AI responsibly. Meanwhile, their own guidelines are often too abstract for practical implementation.
In this talk, we present the Data Ethics Decision Aid for Media (hereafter D4M)—a framework developed by Utrecht University’s Data School in collaboration with media conglomerate DPG Media—and discuss how the D4M workshops aim to foster critical data and AI literacies. Starting from the premise that critical inquiry is a trainable skill, D4M prompts reflection and stimulates workshop participants to ask pertinent, critical questions about a real-world data or AI project. In these sessions, participants learn from one another about a project’s technical aspects and, crucially, about the broader social, organisational, and legal context in which the technology is embedded. As such, D4M highlights both the normative dimensions of critical literacy and the socio-technical nature of data and AI. Drawing on three D4M workshops, we reflect on how the framework helped foster awareness not only of the risks, but also of the alternative approaches and courses of action available.
Futures and Critical AI Literacies: Resisting inevitability narratives through creative methods and critical pedagogy
Session 1