- Convenors:
- Annette N. Markham (Utrecht University), Jessica Enevold Duncan (Lund University), Riccardo Pronzato (IULM University), Sarah Barns (RMIT University)
- Format:
- Combined Format Open Panel
Short Abstract
What methods can foster enduring and adaptive critical literacies about emerging tech-intensive futures? Panelists present approaches, experiential practice, narratives of intervention, creative experiments, failures, and challenges. A workshop on workshopping techniques follows panel presentations.
Description
What methods can foster critical and enduring literacies about tech-intensive futures? With the term “critical literacies,” we refer to forms of intervention grounded in critical theory and holding a “better futures” orientation, whether in relation to media, algorithmic, data, or AI-based systems. The past decade has witnessed a groundswell of initiatives among practitioners across domains to improve critical literacies through creative engagement techniques focused on different socio-technical systems and processes. However, these efforts to shift from fleeting to more sustained literacies that promote action, resistance, and change are continuously thwarted by the persuasive power of Big Tech, the design and seamless ease of technical interfaces, and the hegemonic acceptance of futures that seem “determined” by inevitable forces beyond individuals’ or communities’ control. The most recent case in point is the struggle of educational institutions to respond to GenAI in ways that effectively resist tech-driven solutionism or grand promises of greater educational futures through AI.
In this combined format panel, accepted contributors present examples of cutting-edge approaches, experiential practices, narratives of intervention, ongoing creative experiments, or failures and challenges related to the development of critical literacies from differing disciplinary perspectives.
Panel(s) will be followed by a workshop on workshopping, where participants are invited to collaboratively explore methods in action by selecting among three workshops, each activating critical literacies through a different critical pedagogy-informed technique: 1) speculative future-oriented design thinking; 2) data walking; and 3) autoethnographic situational mapping. As they are enacted, the workshops are also discussed at a meta level as facilitators and participants ‘think aloud’ about choices for adjusting techniques in situ. Participants are invited to reflect on workshop best practices by considering projects in which they want to develop workshops, or challenges they want to tackle in existing workshops. Workshop facilitators have longstanding expertise in workshop facilitation and critical pedagogy techniques.
30-person workshop needs movable tables, projector/screen, 5-6 flipchart easels
Accepted contributions
Session 1
Short abstract
Media organizations face many ethical challenges around adopting AI. We present D4M, a workshop-based framework that fosters critical data and AI literacy through group reflection on technologies and their use context. We discuss findings of 3 workshops centred on real-world data and AI projects.
Long abstract
Encouraged by powerful narratives to get on the AI hype train or risk becoming obsolete, many media organizations are experimenting with AI. At the same time, AI use in media organizations presents complex ethical issues, including bias and discrimination, job displacement, copyright infringement, and the risk of eroding public trust in journalism. Given these developments, media organizations are looking for guidance on using AI responsibly. Meanwhile, their own guidelines are often too abstract for practical implementation.
In this talk, we present the Data Ethics Decision Aid for Media (D4M hereafter)—a framework developed by Utrecht University’s Data School in collaboration with media conglomerate DPG Media—and discuss how the D4M workshops aim to foster critical data and AI literacies. Starting from the premise that critical inquiry is a trainable skill, D4M prompts reflection and stimulates workgroup participants to ask pertinent and critical questions about a real-world data or AI project. In these sessions, participants learn from one another about a project’s technical aspects and, crucially, about the broader social, organisational, and legal context in which the technology is embedded. As such, D4M highlights both the normative dimensions of critical literacy and the socio-technical nature of data and AI. Drawing on three D4M workshops, we reflect on how the framework helped foster awareness not only of the risks, but also of the alternative approaches and courses of action available.
Short abstract
Our AI Citizenship framework informs a prototype called Play-AIble Ethics, a game-based learning experience that addresses the predominantly utilitarian orientation of many AI literacy frameworks and advances an alternative approach to AI literacy that foregrounds citizenship, rights, and collective empowerment.
Long abstract
Many AI literacy frameworks emerge from human-computer interaction design and behavioural science fields, with little to no engagement with the extensive scholarship on media, digital, or data literacies. To address this, we translated the Data Citizenship framework (Carmi and Yates, 2023) into AI Citizenship. The Data Citizenship framework views data literacies as the capability (Nussbaum, 2002; Sen, 2009) of citizens to do, think, and participate with their data socially and politically. While acknowledging the exploitative nature of AI systems, our AI Citizenship framework aims to use this technology to design a society-centred, community-oriented set of AI Open Educational Resources which will challenge social power structures.
Based on our AI Citizenship framework, we built a prototype called Play-AIble Ethics, a game-based, hands-on learning experience and related toolkit resources. Instead of lectures or technical training, Play-AIble Ethics embeds real-world cases in interactive stories, with missions that people play through at their own pace. We applied the AI Citizenship framework to five real-world use cases reflecting challenges citizens encounter with AI. Participants are guided by a set of expert characters who help them develop more informed and ethical, real-world, task-based outputs using generative AI. These were developed as interactive scenarios powered by small-scale language models, enabling users to curate their own repositories through NotebookLM. Along the way, they build practical skills for working with AI, learn how to question and challenge AI systems, and explore how they can take part in shaping how these technologies are used.
Short abstract
The card game “Monsters, Miracles, and Metaphors” invites players to reflect on how dominant narratives and metaphors shape our understanding of, and discourse around, AI.
Long abstract
Metaphors are a crucial component of human sense-making, especially when it comes to emerging phenomena. Recently, there has been considerable scholarly interest in the metaphors used to describe AI. The spectrum of metaphors ranges from utopian to dystopian, from allegoric to immaterial, from light-hearted to heavy-hitting.
Considering the crucial role these metaphors play in delineating the possibilities of AI for our technological futures, it is paramount to continuously examine which narratives emerge, which ones get pushed by whom, and which ones get adopted.
In this talk we discuss the potential of card games as a pedagogical device for critical engagement with discourse. Historically, playing cards started out as luxury items – an amusement reserved for the elites. As printing technologies evolved, card decks became available to a broader public and were quickly adapted as a way to play with, transform, and subvert the hierarchical structure that the cards represented.
Building on these subversive histories, we illustrated and designed a set of playing cards that collects the dominant narratives around AI and contrasts them with a few we conjured up ourselves.
Modelled after the card game Quartett, “Monsters, Miracles, and Metaphors” offers three different modes of play. The game invites players to confront narratives around AI, to ask how fantastical or far-fetched the depictions are, and to consider what they might foretell about our futures. The presentation reflects on the development process and on our experiences facilitating game-based workshops, and discusses the possibilities and limitations of this method.
Short abstract
With Music AI as the case study, our critical interdisciplinary AI course invited computing students to move beyond techno-solutionist framings. Drawing on teaching and student experiences, we reflect on the challenges and what they mean for developing critical interdisciplinary pedagogies.
Long abstract
Calls to educate computing students “in the public interest” have often taken the form of incorporating Ethics and Responsibility courses across computer science degrees, with particular attention to AI. While welcome, these efforts can remain framed within assumptions inherited from technical disciplines, in which interventions are limited to technical fixes, overlooking the need to interrogate AI systems as sociotechnical assemblages resulting from the mutual mediations between society, culture and technology.
In this paper we reflect on our approach to developing and teaching a critical interdisciplinary AI course focused on the cultural ramifications of Music AI. The eight-week course was taught by researchers from anthropology, science and technology studies, music studies, creative practice and computing (among other disciplines) who are themselves researching critical approaches to Music AI. The course introduces students to a range of perspectives on Music AI and encourages them to develop greater reflexivity about their technical work, its promise and its limits. The aim is to teach them how to articulate complex sociotechnical problems, broadening their scope beyond technical concerns, and on this basis, empowering them to eventually build different and better systems.
We reflect on how the course has worked in practice, drawing both on our experience delivering it and on interviews with students. We discuss what was challenging, focusing on communication across disciplines and on navigating tensions between the pedagogical methods of the social sciences and humanities and those of STEM. We conclude by presenting provocations to induce a critical interdisciplinary transformation in higher education.
Short abstract
I propose a short discussion of artworks and creative projects that explore ecologies of digital materiality through participatory art practice and illustrate how these artworks could contribute towards radical pedagogies in computer science education.
Long abstract
I am a practice-based PhD student at Winchester School of Art (University of Southampton) and an educator at University of the Arts London, where we have a research group, 'Critical Climate Computing', engaging with the entangled relations between computation and the climate crisis.
My recent work focuses on developing critical mineral literacies through participatory art practice, combining energy politics with land art and walking practices. Since 2022, I have been conducting ‘Rare Earth Walks’ - guided, artist-led walks exploring cultural narratives around critical minerals that consider geological media and digital materiality through non-human narratives. I have delivered educational workshops for KS1- and KS2-aged young people developing digital literacy through ecological and environmental practice, and I ask how these artistic methodologies might contribute towards transformative pedagogies in emergent models of experimental computer science education.
This proposal intersects with a number of relevant fields across art, activism and education, such as 'Permacomputing', a design philosophy that re-thinks computing in the way permaculture re-thinks industrial agriculture, and 'Ecomedia', a field of the digital humanities that produces curricula prioritising the inherent environmental materiality of media in education. What could be gained by applying principles of Forest School education to critical computing projects that teach computational literacy through the political action of environmental and social justice movements? Can transformative pedagogies from the arts foster empowerment, agency and action for young people navigating the polycrisis caused by Big Tech and extractive capitalism?
Short abstract
This contribution proposes speculative fabulation and design thinking "AI in Education" workshops as interventions where participants can resist the frame of inevitability and strengthen critical AI and futures literacies by reclaiming agency in shaping otherwise-possible educational futures.
Long abstract
Tech-driven solutionist imaginaries reverberating across EdTech and, increasingly, AI-higher-education discourses consolidate a frame of inevitability through which technologies are experienced as shaping the horizons of educational futures. Within this frame, the hegemonic force of Big Tech casts the future as singular and foregone, foreclosing spaces in which plural futures might be imagined and brought into collective deliberation. Under such conditions, education is narrowed to quick-fix adaptation, as pedagogies and institutions are reshaped to accommodate a future treated as a fait accompli.
This contribution presents the “AI in Education” Workshop A+B series, developed and facilitated by Prof. Dr. Annette Markham, as an intervention through which educators might resist such inevitability. Workshop A is a speculative fabulation workshop organized around four “what if?” scenarios of possible future universities. Workshop B extends this through a design-thinking process in which participants develop assignment and assessment pilots for testing within existing programmes.
Drawing on 10 iterations of these workshops, I trace how they cultivate the conditions through which educators reclaim agency in imagining and working toward the enactment of otherwise-possible educational futures, while also revealing the sticking points that make imagining beyond current affective, institutional, and technological frames difficult. I further argue that these workshops can strengthen critical AI and futures literacies by cultivating a more resilient orientation – one that sustains iterative pedagogical experimentation amid shifting technological conditions while keeping educators engaged with the fundamentals and inviting them to radically re-imagine teaching and learning with the values and purposes of higher education at the forefront.
Short abstract
This presentation discusses a human-machine communication class and a counter futures project designed to develop critical literacy skills that challenge dominant technocultural myths. It also invites discussion of the (im)possibilities of expanding these methods beyond the classroom.
Long abstract
Examining hegemonic myths relating to technology to denaturalize them is a continual challenge animating work in science and technology studies, and media and cultural studies. In this presentation, I present one attempt to design and guide a Human-Machine Communication class dedicated to the project of challenging myths associated with communicative AI and social robots. I explore specific challenges and opportunities of working with students at a STEM-oriented, US-based institution to question naturalized technocultural assumptions and develop critical orientations toward technology. Most of this presentation focuses on exploring a counter futures group project I designed for the class inspired by projects like counter-n[dot]net, which provide ideas to question dominant technological narratives and pathways to challenge the immanence of technoimperialist futures. Students worked in groups to produce an imagined future based on the identification of a counter-hegemonic value, the creation of a new myth based on that value, and a conceptual prototype of a communicative machine that could emerge given those conditions. The assignment helped students interrogate the way materiality, myth, and social values intersect to create the conditions for the emergence of new technologies and practices and imagine the technocultural world otherwise. Beyond sharing my experience with the class and the project, I hope to discuss the possibilities of expanding this type of praxis to support critical literacies beyond the classroom and to critically reflect on ways these methods may be co-opted to reproduce the dominant myths they seek to challenge.
Short abstract
From self-reflection using a radical, queer, anti-colonial tarot deck to speculative fiction using progressive storytelling analyzed by GenAI, this contribution presents an ongoing creative experiment in teaching gender, race, and information to undergraduates studying informatics.
Long abstract
This contribution presents an ongoing creative experiment in cultivating sustained critical literacies about technology, information systems, and algorithmic futures within an undergraduate informatics course on gender, race, and information. The course employs unconventional methods to help students critically examine how information technologies reflect and reproduce power relations around race, gender, and ability.
Students begin by engaging with their own histories and particularities through “Tech Confessional” narratives and the art of the Next World Tarot. Next World Tarot (Road, 2020) is a radical, queer, anti-colonial deck that illustrates a journey about owning truths, finding connections with bodies that may have been lost through trauma or societal brainwashing, smashing systematic oppression, taking accountability, facing challenges, and ushering in alternative futures. These exercises inspire self-reflection and help students critically engage with their assumptions about technological progress. Later, students create speculative fiction using progressive storytelling techniques, then analyze their own narratives using GenAI tools as both objects and subjects of inquiry.
This approach considers critical literacies in relation to infrastructures of imagination (Benjamin, 2024; Potts & Facer, 2025). What learning activities enable students to uncover/unsettle assumptions, encounter diverse perspectives, and desire collective visions for transformation? The presentation reflects on successes, failures, and ongoing challenges, particularly around resistance to datafication of education.
Short abstract
A creative intervention that embeds critical AI literacy and ethics into postgraduate research methods training. Through reflective exercises such as data flow mapping and speculative futures research design, it examines assumptions of neutrality, perceptions of expertise, and technosolutionist attitudes.
Long abstract
This creative intervention aims to embed critical AI literacy and AI ethics into research methods training. Students at all levels now turn to AI tools to automate aspects of the research process, from undergraduates who first grapple with research design when tackling a dissertation to more experienced researchers lured by the ease of automation. As higher education policies focus on adoption (AI as a set of tools) and plagiarism (AI as cheating), less time and energy are spent understanding the critical issues that underlie AI use, its ethics, and its epistemological implications. This intervention addresses the need to enhance awareness of AI as an actor in the production of knowledge, and to develop literacy about the ideological, political, and economic stakes of AI infrastructure globally. It consists of a one-term pilot in a postgraduate setting (MA in Strategic Communication), in a course on applied research methods. The critical AI exercises are reflective and structured around milestones, while data collection concerns three pillars: assumptions of neutrality, perceptions of expertise, and attitudes of technosolutionism. The exercises include a mapping of data flows and a speculative futures research design (where participants create a future research pipeline scenario with and without the use of AI). The aim is to create an AI ethics and critical skills toolkit that can be adapted more widely in research training across disciplines. In the EASST session, my focus will be on introducing this replicable framework.
Short abstract
This presentation showcases collaborative zine-making from the pliegos.net project as a practice of critical literacy about AI and platform power. Participatory workshops combine analog publishing with experimental tools for zine pagination and PDF obfuscation to resist automated content extraction.
Long abstract
Recent activist and investigative zines addressing AI-powered surveillance and border enforcement circulate both as printed artifacts and as digital files (https://www.aimustdie.info/; https://www.404media.co/icezine/), while in some contexts zines themselves have been treated as incriminating materials in repression cases (https://freedes.net/). These situations reveal a paradox: even small-scale analog media practices are entangled with infrastructures shaped by / for platform monopolies, automated content extraction and expanding surveillance regimes. At the same time, their material circulation creates forms of communicative opacity that can partially evade algorithmic monitoring, recalling historical underground networks like Soviet 'samizdat'.
This contribution discusses a set of zines and publishing experiments developed within the pliegos.net project (Senabre Hidalgo & Espelt, 2025), examining how collaborative zine-making can function as a form of critical literacy about / against tech-intensive futures. Rather than approaching zines primarily as historical artifacts of subcultural media, our presentation focuses on their contemporary re-emergence as participatory practices through which communities reflect on, question and respond to technological systems and narratives of (digital) inevitability.
The showcased materials emerge from participatory action research conducted through collaborative zine-making workshops combining analog co-writing, on-site printing and immediate physical distribution. The presentation also reflects on emerging threats to zine circulation, introducing ongoing experiments with open-source tools for zine pagination and PDF obfuscation designed to complicate automated content extraction and enable digital files to “jump back” into physical circulation.
Ref: Senabre Hidalgo, E., & Espelt, R. (2025). Chapbooks against the machine: analog co-writing and publishing as a collective geography of AI refusal. cultural geographies, https://doi.org/10.1177/14744740251355281
Short abstract
Based on the approach of socio-technical imaginaries as situated and performative narratives, this paper presents the visions of the future that bachelor students have been asked to write for a participatory research program dedicated to narratives and agency in the ecological transformation.
Long abstract
Visions and promises of tech-intensive futures are means to colonize the imaginary and pave the way for emerging technoscientific industries (Brown 2001; Joly 2010; Jasanoff and Kim 2015; Audétat et al. 2015). Indeed, visions promoted by stakeholders narrow down and mask the plurality of socio-technical imaginaries that envision environmental transformations and technology in society. But what is the actual adherence to, and what are the affects associated with, the various socio-technical imaginaries, including the contemporary and often extreme technosolutionism? One of the first goals of the four-year STRIVE project at the University of Lausanne is to study bachelor students' representations of sociotechnical imaginaries using the letter-from-the-future method (Sools 2020). Students from different disciplines are asked to imagine themselves in 25 years and to write a letter to their present self, expressing emotions, aspirations, and concerns. The findings show narratives that reflect heterogeneous, nuanced, and affectively diverse representations of the future, challenging assumptions about a Western deterministic sociotechnical imaginary (Durosier et al., 2026). The knowledge gathered through the letters from the future will be used to design participatory exercises based on futures-thinking methods, to be held in tech-management and educational milieus, as a means to foster critical literacy about technoscientific promises and encourage participants to avoid passive attitudes toward the (their) future. The researchers of the STRIVE program are keen to learn and exchange about workshopping techniques in order to design the next phases of their project.
Short abstract
This talk presents ArtScience methods — creative experiments with extended reality, installations, and interactive storytelling — from artist residencies in science and technology research. They foster critical literacies by closing the gap between human senses and data through play and agency.
Long abstract
As knowledge production in emerging tech-intensive futures becomes increasingly mediated by AI and large-scale data infrastructures, both researchers and publics experience growing experimental distance — a gap between technological systems, their data, and our senses that limits engagement, agency, and critical reflection.
This presentation examines artistic methods as interventions that can help close this gap and foster critical literacies about emerging technological futures. Through ArtScience collaborations embedded within research environments, artists and scientists co-create experiential settings — including extended reality (XR) artworks, immersive installations, and interactive storytelling — that translate abstract phenomena and technologies into sensory encounters. These experiments enable participants to explore complex systems through play, spatial reasoning, and embodied engagement.
Drawing on case studies from residencies integrated through the ARTlab Nottingham and projects including the Cosmic Titans exhibition and the generative-AI initiative Beyond Resonance, I present interventions in which artists work within physics and technology contexts to make otherwise intangible phenomena tangible — such as the vast scales of the universe, the first seconds after the Big Bang, or emerging quantum and AI technologies. In immersive environments experienced by around 100,000 people, participants can adopt more-than-human perspectives to explore and navigate complex data and technological systems.
Evidence suggests immersive engagement enhances understanding, pattern recognition, agency, and sense-making. It can stretch perspectives, reveal limitations, and improve system design. The talk reflects on benefits, challenges, and limitations of immersive ArtScience collaborations as methods for cultivating critical literacies about emerging tech-intensive futures. XR environments will be demonstrated if the format allows.
Short abstract
Insights from an interdisciplinary project on DAS and dark fibre networks: drawing on artistic and activist-led research, feminist STS, and science fiction to create community methods that enabled a bottom-up approach to literacy and agency-enabling conversations on governance.
Long abstract
Distributed Acoustic Sensing (DAS) is a technique that transforms fibre-optic communication cables into continuous sensors: pulses of laser light sent through the fibre detect changes in backscattered signals caused by vibration or strain, enabling the cable to register disturbances along its length. The paper draws on the interdisciplinary research project Soundscale, which explores the potential of urban dark (under-utilised) fibre networks as sensing infrastructure for smart city applications. Similar systems are being actively developed in other contexts for military and security purposes, such as border monitoring and infrastructure protection. Sustainable and trustworthy governance of such technologies, with active consultation of the public, therefore becomes of utmost importance.
I would like to share the strand of the project that drew on artistic and activist-led research, critical theory, feminist STS, and science fiction to create community participation methods that enabled a bottom-up approach to literacy and agency-enabling conversations in relation to DAS governance. In a funding landscape where emerging-technology development usually comes late to citizen consultation, this becomes increasingly important. I will use my position within this project to share the methodology used over the last six months, including exercises in science-fiction imagining, artistic research, and community work. Employing these methods throughout the project aimed to avoid the traditional route of “art-as-output” for scientific process, but also produced new answers to some disciplinary traps and questions of validity, participation, and media literacy.
Short abstract
This contribution describes the ideation, design and production of a physical exhibition and associated media making as a form of critical research method and dissemination, intended to promote critical engagement with the environmental politics of technology.
Long abstract
In contemporary Ireland, where Big Tech and a pro-industry government seek access to ever more resources for expanding AI infrastructure and the tech economy at large, there is an ever greater need to critically engage with the complex and often contradictory relationships between the technology industry, resource scarcity, and the climate crisis. In this context, the author shares their recent practice of using critical and creative methods to undo the prescribed logics and dominant narratives of technological inevitability in the case of Ireland. The author details the process of combining material experimentation with historical research and critical thinking about the environmental politics of data technologies. This includes the design and making of a large-scale physical exhibition for a wide public audience, designed to provoke embodied and visceral understandings of how data processing and storage rely on intensive consumption of resources and territorialise the physical environment. Using the prism of heat to foreground the thermodynamic processes necessary for data production, storage, and distribution, the installation asserted that the production and dissemination of information is intrinsically connected to the production and dissemination of heat. In conclusion, the author reflects on the need for alternative narratives and visual languages; on winning over the unconverted; and on making room for critical technology literacy in engineering contexts.
Short abstract
We present the structure, techniques, and results of three variations of a critical AI literacy methodology: a guided autoethnographic DIY toolkit in which students produce field diaries analyzing their relationships with genAI, generating deep reflection, critique, and future imaginaries.
Long abstract
In recent years, genAI systems have emerged as key, intimate actors in the granular processes of interaction in which socio-cultural relations are shaped, individuals are constituted as subjects and meaning-making activities and identity-formation processes emerge. This has brought considerable new attention to the importance of critical literacies about genAI and emerging machinic capabilities within complex ecosystems of interdependent human and non-human activities (Markham and Pronzato, 2023; Pangrazio, 2026).
However, in this scenario, sustained literacy is difficult to achieve: individuals confront computational systems that are extremely difficult to comprehend, with interfaces designed to encourage continuous use and an illusion of control (Pronzato and Markham, 2023), while Big Tech’s enormous power and persuasive narratives normalise techno-determinist futures as inevitable (Markham, 2021).
We present three applications of Markham’s (2012) guided autoethnographic methodology: a three-day, a two-week, and a 21-day version, in which facilitators train students to study their own relations with AI. Beyond focusing on how they or other students define, use, and feel about GenAI currently, students consider how current trends are influencing the future of higher education. This guided autoethnography method, grounded in interpretative and narrative inquiry, critical pedagogy, technofeminism, and critical theory, is useful for understanding students’ experiences; more than that, it fosters self-reflexivity, critical examination of how AI seeps into everyday life, and a critical consciousness that is deeply personal and foundational to strengthening critical AI literacies. The project involves IULM University, Queensland University of Technology, and Utrecht University, and is part of the work of the Futures+ Literacies+ Methods Lab (FLL).