T0253


From Verification to Value: Making Third-Party Monitoring Evidence Evaluation-Ready (Lessons from Myanmar) 
Contributor:
Carroll Patterson
Format:
Poster
Mode:
Presenting in-person
Sector:
Private sector / Commercial

Short Abstract

How can evaluators use third-party monitoring (TPM) evidence without treating it as “just M&E”? I argue that TPM deserves its own evidence space, distinct from Evaluation and M&E data. My talk draws on humanitarian TPM in Myanmar to offer shared terms and practical design tips for ethical, credible use.

Description

Access constraints, remote management, and duty-of-care risks have made third-party monitoring (TPM) a defining feature of development and humanitarian delivery in many contexts. Yet evaluators often inherit TPM datasets late in the cycle, misread verification of delivery, quality, and use as routine monitoring, or discount TPM evidence because its findings, methods, and governance do not map neatly onto evaluation practice.

This presentation argues that Evaluation, Monitoring & Evaluation (M&E), and TPM overlap—but none is a subset of the others. Each is shaped by different purposes and incentives: evaluators are commissioned to make defensible claims about merit, worth, and contribution; M&E systems prioritise performance reporting; and TPM is organised around independent verification, risk management, and operational accountability. When evaluators treat TPM as “just M&E”, they miss TPM’s distinctive evidentiary value, putting their findings at risk of either overconfidence in reported performance (e.g., treating verification as impact) or undue scepticism about results (e.g., discarding useful findings).

Crucially, any evaluator of humanitarian and development assistance will eventually confront diversion and fraud, waste, and abuse (FWA). These are not rare exceptions; they are predictable risks. Rather than treating FWA as taboo or as an “audit-only” topic, this talk offers practical, ethical ways to detect, test, and communicate possible diversion/FWA without turning evaluation into an investigation or putting people at risk: asking questions and making observations that identify warning signs without prompting accusations; cross-checking claims across sources (including patterns in micro-narratives); being clear about where the data came from and how it was handled; agreeing in advance what counts as a serious concern; and reporting uncertainty calmly and proportionately.

Using a case example of TPM of humanitarian assistance across Myanmar, I show how embracing the TPM paradigm opens pathways of inquiry that conventional evaluation designs underuse: (1) evidence about implementation fidelity—who was “reached” and who was not; (2) deliberate use of “negative evidence” (non-delivery, substitution, or obstruction) to test causal claims; and (3) fast feedback loops that can inform decisions before a final report. The case example draws upon short, structured stories from participants (micro-narratives, in the spirit of SenseMaker) to complement checklists and numbers, and to help explain context and unintended effects. Used well, these stories strengthen cross-checking across sources and help surface issues that people may not name directly.

The talk sits squarely within Theme 2: how evaluation can be embedded into everyday decision-making, learning, and delivery; what helps create environments where evidence is valued and used; and how ethical considerations and power dynamics shape whose voices are heard. I address practical safeguards for voice, safety, and bias when “independence” is contractual and access is uneven, including managing gatekeeper influence and being transparent about who collects, controls, and interprets the information.

I conclude with a shared vocabulary—e.g., verification, validation, triangulation, fidelity, reach, risk signals, and evaluative claims—to make TPM data more interpretable, more comparable across time and areas, easier to use responsibly, and better able to support evaluative reasoning.