- Contributors:
- Florence Randari (Mercy Corps)
- Sydney Stevenson (Mercy Corps)
- Format:
- Poster
- Mode:
- Presenting in-person
- Sector:
- Nonprofit / charity
Short Abstract
How do teams become places where evidence is valued and used? Drawing on Pause & Reflect practice across 20+ humanitarian and development programmes, this session shares practical lessons on the behaviours, conditions, and relationships that build real evaluation cultures.
Description
What truly enables teams to value and use evidence in their everyday decisions? In fast-paced humanitarian and development settings, MEL systems often generate data, yet the cultural, relational, and organisational conditions required for evidence use are far less understood. This session offers practice-based insights from Mercy Corps’ experience designing and facilitating structured Pause & Reflect processes across more than twenty programmes in Africa, the Middle East, and Asia—spanning emergency food security, cash assistance, protection, resilience, and market systems development.
Rather than framing evidence use as a technical gap, this work positions it as a cultural one. Through cross-functional reflection sessions—supported by learning questions, participatory dialogue, consolidated data sets, and SOAR analysis—teams begin to establish the norms, habits, and relationships that allow evidence to inform everyday decisions. While the USAID-funded Pause & Reflect toolkit provides a helpful structure, this session focuses on what enables the approach to work rather than on the tool itself.
Three insights consistently emerge across humanitarian and development programmes.
First, evidence is used when teams have protected spaces for sensemaking. Staff in emergency responses often move from one urgent priority to the next, with little room to interpret data collectively. When teams pause—away from immediate delivery pressures—they can identify trends, challenge assumptions, reflect on participant feedback, and generate shared interpretations. This strengthens both learning and decision ownership.
Second, evidence use increases when power dynamics are intentionally disrupted. In many teams, hierarchical routines shape whose interpretation is accepted and whose evidence counts. Creating inclusive, participatory spaces where diverse staff voices, local partners, and community insights are elevated has proved essential. This redistribution of interpretive authority strengthens localisation and builds environments where evidence is collectively valued.
Third, evidence becomes actionable when learning is tied to feasible adaptation. Teams engaged more deeply with evidence when reflections led to clear next steps—adjusting transfer values, refining accountability mechanisms, improving market monitoring tools, or strengthening targeting approaches. Learning that remains abstract rarely shifts behaviour; learning that leads to adaptation does.
The session will also highlight challenges: building psychological safety in politically sensitive environments, addressing imperfect or fragmented data, sustaining learning amidst staff turnover, and balancing structured reflection with delivery demands. Examples will illustrate how similar enabling conditions—shared purpose, inclusive dialogue, and structured reflection—support evidence use across both humanitarian and development contexts.
Participants will leave with a nuanced understanding of what helps create environments where evidence is genuinely valued: collective reflection rituals, inclusive sensemaking, reduced hierarchy in evidence interpretation, and practical links to action. This session offers evaluators, practitioners, and programme leaders insights for embedding evaluation into everyday work, regardless of context.