- Format:
- Pecha Kucha
- Mode:
- Presenting in-person
- Location:
- Room 1
- Sessions:
- Monday 18 May, -
Time zone: Europe/London
Accepted papers
Session 1 Monday 18 May, 2026, -
Paper short abstract
Evaluation is only valuable if its insights are used. Yet, too often, findings fail to translate into meaningful action. We share three practical strategies to help bridge the gap between evaluation and action and suggest how others can implement the same principles.
Paper long abstract
Evaluation is only valuable if its insights are used. Yet, too often, findings fail to translate into meaningful action. One of our core values at the Raspberry Pi Foundation is “focused on impact”. Our impact team leads on the monitoring and evaluation of programmes which produce computing curriculum resources for schools, support a global network of extra-curricular code clubs, and provide computing CPD for educators. We aim to support project teams to achieve the four key aims of our impact strategy: do the right things; measure what matters; keep getting better; shout about it! We share three practical strategies we implement to help bridge the gap between evaluation and action.
These are:
1. Live elements. We share key metrics and live feedback with colleagues and partners through dashboards, empowering them to monitor their programmes continuously and make rapid, iterative improvements without waiting for a formal evaluation cycle to conclude.
2. Business partnering. We embed Impact Managers within project teams, enabling them to understand project contexts, proactively identify monitoring and evaluation needs, and highlight relevant evidence at key moments to support decision making.
3. Capturing responses to recommendations. We work with project teams to understand and document their responses and planned actions to our recommendations to encourage accountability and action.
We provide examples of how each of these aspects helps to convert evaluation to action in computing education programmes and partnerships across the globe. We suggest how other organisations and independent evaluators can implement the same principles.
Paper short abstract
This project looks at how an evaluative culture was developed in the Media School, UAL. A robust, inclusive, participatory evaluation framework was created, supported by a toolkit and blog. This has led to a supportive culture for staff and students, with meaningful, action-oriented evaluation.
Paper long abstract
This presentation outlines a three-year project on developing an evaluative culture in the teaching and learning community in the Media School at the University of the Arts London. Evaluation is central to understanding the effectiveness of teaching (Thomas et al., 2017; Thomas, 2020; Austen et al., 2021). The Office for Students and the QAA provide extensive guidance on self-evaluation measures for institutions to assess teaching quality, and robust quantitative and qualitative evaluation was a significant factor among TEF Gold awardees (Moore et al., 2023).
Yet evaluation processes are not always accessible to lecturers at the local level. A research project was set up to explore how to establish an evaluative culture within the Media School and embed educational enhancement within day-to-day teaching. The research involved institutional case studies and research with staff and students.
This led to a set of criteria for effective evaluation and the design of an innovative, flexible evaluation toolkit for data collection that is easy for staff to implement on an ongoing basis. We focused on designing qualitative, inclusive, participatory tools to capture a wide range of student voices. A blog was created to present the tools and disseminate findings.
Three years on, this framework is embedded in the school and used to assess anything from one-off interventions to course reapprovals; the 18 projects so far have covered around 700 students and 28 staff, exploring themes such as student confidence, experience, learning and community. Tools include collaborative journey mapping, advice cards and dashboards. Staff and students work in partnership throughout the process. For students it is a reflective and meaningful part of their learning journey; for staff it is an opportunity to test innovation, pinpoint what works and build on it. We then share best practice.
The presentation will explore the key features that have helped build capability and create an evaluative culture:
• A consistent, robust approach for evaluation projects
• A funded, supportive cross-school culture
• An embedded, participatory approach to capturing student voices
• Shared responsibility among staff
• The development of a thematic database of evidence
• Regular best-practice sharing events
Evaluation has led to an action-oriented approach to educational enhancement. Building on theories of change and an iterative cycle (Thomas et al., 2017), each evaluation includes a statement of next steps and proposed actions, which are in turn evaluated. In developing evaluative mindsets, staff use new tools to assess their work, ask evaluative questions as they research, and design the process into their teaching.
Takeaways from the session include:
• a framework which empowers staff to implement their own evaluations
• practical, creative, varied data-capture tools
• ways of sharing best practice, including a blog of resources
• the characteristics of an evaluative culture
Evaluation can be a flexible and positive way to capture evidence of impact and effectiveness in HE. The innovative approach we have established has led to staff engagement, educational enhancements and improved conversations with students; through it we have been able to capture and celebrate staff achievements in their T&L innovations.
Paper short abstract
We share lessons from engaging diverse stakeholders in a complex theory-based contribution tracing evaluation of a homelessness intervention. We’ll highlight practical approaches and challenges, reflecting on what worked and what we’d do differently in future.
Paper long abstract
In this session, we describe how we worked with stakeholders from design through to dissemination on a theory-based contribution tracing evaluation. Contribution tracing blends process tracing and contribution analysis evaluation methods to strengthen causal inference within a theory-based approach. Interest in such complexity-appropriate evaluation methods has grown rapidly, accelerated by the UK Government’s 2020 Magenta Book supplement, bringing both new opportunities and new challenges for evaluators.
The evaluation formed part of a ground-breaking Test and Learn programme testing nine innovative projects that aim to reduce homelessness. The programme was commissioned by the Ministry of Housing, Communities and Local Government and delivered by the Centre for Homelessness Impact. This particular project offered 20 weeks of accommodation to non-UK nationals who were sleeping rough, paired with specialist legal advice to help resolve immigration status issues.
The robustness of complexity-appropriate methods relies on two core foundations: (i) strong and credible programme theory, and (ii) systematic, transparent evaluative decision-making. Both require meaningful stakeholder engagement, which can be challenging even in straightforward evaluations. When the methods themselves add layers of complexity, evaluators must approach stakeholder engagement carefully to avoid weakening the evaluation.
We share our approach to involving diverse stakeholders in theory of change development, user-journey mapping, interim evidence assessment, contribution-story refinement, testing the confirmed contribution story, sharing findings, and collaborative sensemaking. Stakeholders included people with lived experience of homelessness, sector experts, government analysts and policymakers, local authorities and voluntary-sector delivery partners. We will reflect on what worked and where we think we would do things differently in future.