- Format:
- Pecha Kucha
- Mode:
- Presenting in-person
- Location:
- Room 2
- Sessions:
- Monday 18 May, -, -
Time zone: Europe/London
Accepted papers
Session 1 Monday 18 May, 2026, -
Paper short abstract
Evaluating sustainability is complex and lacks regularity and depth beyond ‘results sustained’. The UKES award-winning EAGI approach recently informed the baseline evaluation design for the FCDO BRAVE programme, where it was adapted into a scorecard to support a robust sustainability assessment.
Paper long abstract
Evaluating sustainability is a complex issue that lacks regularity and depth beyond an assessment of ‘results sustained’. This session will present lessons from the design and application of an innovative sustainability scorecard on a large-scale, multi-component resilience programme in Pakistan (FCDO Building Resilience and Addressing Vulnerability to Emergencies, or BRAVE).
This session explores how innovative concepts can strengthen evaluations, particularly how practitioners address the important assessment of programme sustainability.
We will first present the scorecard, beginning with the complexity and systems-thinking theories in which it is grounded. The scorecard was directly informed by the core arguments of the UKES award-winning Environmental Approach for Generational Impact (EAGI) Working Paper. Structured as a rubric with tailored progress markers, the scorecard captures progress across three levels (initially defined in the working paper): from programme objectives, through systemic change, and toward generational impact.
In preparation for BRAVE’s baseline evaluation, the EAGI author and Evaluation Lead worked together to embed tailored lines of inquiry into Third-Party Monitoring (TPM) tools, beneficiary feedback surveys, and Key Informant Interview (KII) guides. Following data collection and analysis, the EAGI author (also Scorecard lead advisor on the programme) conducted an initial baseline assessment using TPM and evaluation evidence. This baseline provided an early view of how sustainability considerations have been integrated into programme design and delivery. In future evaluations, the BRAVE MEL team will continue to track progress against the three scorecard levels. In parallel, the Scorecard is being used to inform a robust assessment of BRAVE’s contribution to transformational change, as required under its climate finance mandate.
This session will provide an overview of the EAGI approach and demonstrate how it was adapted into a programme-relevant scorecard. Specifically, the presenters will explain the design decisions made to align the scorecard with the needs of the programme, namely the decision to use rubrics with programme-specific progress markers integrating language from BRAVE’s Theory of Change and ICF KPI 15 on the likelihood of achieving transformational change. The presenters will close by sharing lessons learned from translating a conceptual framework into a theory-based, mixed-method evaluation tool focused on climate resilience.
Paper short abstract
Faith-based community-led evaluation in rural Rwanda shows how spiritual leadership and participatory methods bridge the gap between data and action, empowering families and fostering sustainable development.
Paper long abstract
Bridging Faith and Evaluation: A Community-Led Approach to Action in Rural Rwanda
This presentation shares a practical model of how faith and evaluation can work hand-in-hand to drive real action in rural communities. In Rwanda, many people deeply trust faith leaders. As both a pastor and a trained evaluator, I led a community-based evaluation project that used this trust to create lasting change.
Our focus was on key development areas: family well-being, youth engagement, household income, and gender equality. We used participatory methods such as focus groups, storytelling, and community mapping to collect insights. These tools respected local culture and spiritual values, making people feel safe and involved.
What made this approach different is how we turned results into action. Instead of just writing reports, we shared findings in churches and community meetings. People were inspired to act: some started savings groups, others began small businesses, and parents began peer-learning sessions to support one another.
This faith-integrated evaluation model shows that involving trusted community leaders makes evaluation more impactful. It encourages ownership and faster responses to challenges. It also reminds us that evaluation doesn't have to be technical or distant; it can be simple, people-centred, and transformative.
We’ll share lessons, successes, and challenges, offering insights for others working in faith-based or rural contexts.
Paper short abstract
Regular monitoring provides real-time insights that drive user adoption, strengthen digital system performance, and inform national scale-up decisions for biometric ID in Ghana’s routine immunization program.
Paper long abstract
Long-standing data quality challenges undermine immunisation planning, resource allocation, and public health oversight in Ghana. Manual data capture errors frequently lead to both underreporting and overreporting across districts (Piu et al., 2024). Digital health solutions such as biometric verification offer the potential to address these systemic issues by creating unique, verifiable patient records at the point of care. Yet, evidence on whether biometric systems function reliably and sustainably in real-world, low-resource conditions remains limited. This session demonstrates how a strong, continuous monitoring and evaluation (M&E) system became the primary evidence base guiding Ghana’s scale-up decisions on digital adoption through biometric verification within routine child immunisation.
The digital health record system, deployed across health posts, prompted a central question: is the innovation ready for national scale? Instead of relying on the traditional evaluation cycle, stakeholders turned directly to our established monitoring system to answer the strategic questions shaping Ghana’s digital transformation policy. These questions focused on operational feasibility: Are Community Health Workers (CHWs) consistently using the system? Does it function reliably in low-connectivity settings? Do caregivers accept biometrics? And can performance be sustained as deployment expands?
The monitoring system was intentionally designed to capture these decision-critical metrics. It relied on mixed methods to gain insights from health workers as well as the backend data from the digital system.
Over the implementation period, biometric adoption by CHWs increased substantially from 34% to 65%, demonstrating that the technology was becoming embedded in routine workflows. Complementary qualitative data supported this trend, with 91% of CHWs reporting strong motivation to use the system. The transition from fingerprint to face biometrics, triggered by early monitoring data showing low performance, proved highly effective; CHWs overwhelmingly reported that face biometrics was significantly easier and faster to use. This real-time adaptation, driven entirely by monitoring evidence, strengthened both usability and system reliability. Community acceptance remained consistently strong, with over 99% of caregivers consenting to biometric capture, an essential condition for national scale-up.
These continuous monitoring insights directly informed and accelerated scale-up decisions. Stakeholders utilised the evidence to assess operational readiness, inform training strategies, and identify resource requirements. The strength and clarity of the monitoring findings led to the system’s expansion into an additional district, with monitoring data serving as the justification for this policy decision. This multi-method and multi-source monitoring system provided the real-time assurance needed to act confidently.
For evaluators and digital transformation practitioners, this case illustrates how deliberately designed monitoring systems, tailored to decision-making needs, can transform evaluation from a retrospective exercise into a forward-looking driver of technology scale-up decisions. The Ghana experience demonstrates how evaluation can become a powerful enabler of responsible, timely, and equitable digital health scale-up, especially in contexts of limited financial resources.