- Format:
- Pecha Kucha
- Mode:
- Presenting in-person
- Location:
- Room 5
- Sessions:
- Thursday 21 May, -
Time zone: Europe/London
Accepted papers
Session 1: Thursday 21 May, 2026, -
Paper short abstract
Reflective practice models can strengthen evaluation cultures by supporting collective learning, sensemaking and interpretation of findings. This presentation shares practical examples of using reflective practice to embed evaluative thinking across teams, organisations and partnerships.
Paper long abstract
How can using reflective practice models help build evaluation cultures?
Building evaluation cultures is as much about mindset and group dynamics as it is about process and structures. When evaluation encounters reflective practice, transformative shifts can occur in the way we and our organisations learn, interpret and make meaning throughout the evaluation cycle.
Reflective practice developed in disciplines including teaching, social work and medicine as a powerful approach to learning from professional experiences, and using this learning to improve practice for the future. “Reflective practice is the ability to reflect on one's actions so as to engage in a process of continuous learning” (Schön, 1983). This definition can be applied beyond the individual to describe evaluation at a project, organisation or system level.
In order to build strong, inclusive, inquisitive evaluation cultures it is necessary to support people, teams and organisations to expand the framing of evaluation as a purely technical or bureaucratic process, and develop the capacity and space for evaluative thinking and sensemaking. Reflective practice models offer an opportunity to do just this, but how to practically apply them in an evaluation context remains underexplored.
In my experience as an evaluator and facilitator, I’ve found that time invested in designing and delivering bespoke reflective practice workshops for teams involved in evaluation can be highly effective for embedding evaluation cultures. Such workshops can also be a valuable feature of learning partner relationships, an increasingly popular form of support for organisations, partnerships or programmes that focuses on enabling, analysing and embedding learning alongside or as part of an evaluation process.
Experienced reflective practice facilitation can help navigate power dynamics within groups and hold space for difficult conversations when things have gone wrong. Reflective practice is also a useful method for involving and engaging non-evaluators in evaluation, particularly around the interpretation of data and learning and turning insights into action.
In this presentation, I will share up to three practical examples from my work alongside a reflective practice facilitator and trainer with a background in the mental health sector:
1. Embedding reflective practice as a framework for ongoing learning or process evaluation, to enable iterative insights to be fed back into a programme in real time
2. As a tool for supporting a team or partnership to understand and process when something has gone wrong, a relationship has broken down or findings indicate perceived “failures”
3. During the analysis and reporting stage, as an alternative to conventional validation workshops, to deepen collective understanding and ownership of results
Paper short abstract
Drawing from diverse programme examples, this session presents a three-stage approach as a tool to help programmes prioritise and learn while embedding evaluative practices into programme management processes.
Paper long abstract
In many development settings, evaluation remains a standalone activity, often disconnected from programme design, delivery, and decision-making. This limits its ability to inform adaptive management, foster learning, and drive meaningful change. With funding uncertainty, such disconnects become more problematic, as programmes must make strategic decisions with limited resources and shifting priorities. In these contexts, promoting a culture of evaluation is not a luxury but a necessity. To support this, programmes must create environments where evidence is actively collected, co-owned by teams and stakeholders, and responsive to their needs.
Drawing on practical experience from diverse programme contexts including Public Finance Management, Youth and Community Empowerment, and Forest Land Use Governance, this session presents a three-stage approach to institutionalising evaluation in budget-constrained contexts. The approach helps implementing teams prioritise interventions, adapt strategies, and strengthen learning in the face of uncertainty. It offers a practical roadmap for making evaluation a core part of programme culture and decision-making.
Embedding Evaluation in the Theory of Change
The first stage focuses on integrating learning and evaluation priorities into the programme’s Theory of Change. This means having a simple Theory of Change in place that articulates outcome pathways and their underlying assumptions, supported by feedback loops for learning and adaptation. Co-developing the Theory of Change with stakeholders ensures evaluative thinking is embedded from the outset, shaping how change is understood and how learning takes place. In resource-constrained settings, this clarity helps implementing teams focus on what matters most and make informed trade-offs. Equity and inclusion lenses are also critical at this stage, prompting teams to ask whose voices are represented and whose experiences are being measured.
Operationalising Evaluation in Results Frameworks
Results frameworks must evolve beyond static result indicators to capture change over time. Indicators should be designed to track progress in stages, allowing for nuance and adaptation. In this context, rubrics and participatory scoring tools can give teams opportunities to learn as they reflect on their progress. In uncertain funding environments, linking results frameworks to learning moments enables teams to reflect, reprioritise, and adjust course with agility.
Embedding Evaluation in Routine Programme Management
Routine programme management can incorporate periodic evaluative practice. This involves embedding evaluative activities into everyday workflows, such as reflection sessions during team check-ins, collaborative workshops for scoring and interpreting results, and feedback forms that capture team and beneficiary reflections. These tools generate actionable data and build ownership of evidence across teams and stakeholders. In times of financial constraint, this embedded approach ensures that evaluation remains relevant and cost-effective, supporting improvement without requiring separate, resource-intensive processes.
Through interactive formats, this session inspires discussion on how to embed evaluation into programme design, results frameworks, and management, helping teams prioritise, adapt, and learn in the face of funding uncertainty while fostering inclusive, evidence-driven decision-making. This timely discussion enables participants to critically reflect on their evaluation practices in the current context of reduced aid budgets.
Paper short abstract
Together an Active Future (a collective of organisations across Lancashire) has learned what it takes (and what doesn't work) to embed evaluation across a complex system, shifting to everyday learning.
Paper long abstract
This presentation shares learning from Together an Active Future (TaAF), Sport England’s place-based programme in Pennine Lancashire, which has spent the past five years embedding a culture of evaluation and learning across six local authority areas. The work sought to move evaluation from external audit to collective sense-making — positioning insight, reflection, and storytelling as everyday practices rather than compliance tasks.
Situated within Sport England’s wider place-based approaches now scaling across England, TaAF illustrates both the potential and the difficulty of embedding evaluative thinking across diverse systems. While partners increasingly value reflection, it remains one of the biggest challenges for practitioners: knowing what to measure, how to interpret complexity, and how to turn insight into action.
Using realist and developmental evaluation methods, this study explores the conditions that help evaluation “land” within systems. Data is gathered from reflective journals, Stories of Change, Ripple Effect Maps, Conditions Trackers, and more than 60 partner interviews. Analysis focuses on how learning travels — who reads, who acts, and what enables evidence to be used in real time.
Findings show that evaluative practice gains traction when people show up in person to listen, learn, and share their work openly; when reflective spaces are regular and trusted; and when responsibility for learning is written into job roles and governance structures. Evaluation becomes visible and valued when it connects directly to decisions, storytelling, and community voice.
The work contributes to understanding how systems develop the capacity to evaluate themselves, revealing that embedding evaluation is not a technical process but a cultural and relational one. It offers practical insight into how real-time, participatory evaluation can strengthen local learning ecosystems and influence policy and investment decisions across place-based systems.