T0178


Redefining Evaluation Success: Introducing the CUTE Framework (Credibility, Usefulness, Timeliness, Efficiency) 
Author:
Arnaud Vaganay
Format:
Single slot (20 min) presentation
Mode:
Presenting in-person
Sector:
Nonprofit / charity

Short Abstract

A review of 50 frameworks shows evaluation success is undefined and conflated with quality. The CUTE Framework – Credibility, Usefulness, Timeliness, Efficiency – offers a practical, project-based model to improve evaluation value and impact.

Description

Evaluation practice in the UK and internationally has expanded dramatically in recent years, with governments and other funders commissioning thousands of studies each year. Yet a fundamental question remains surprisingly unaddressed: what does it mean for an evaluation to be successful? While evaluation theories, standards, and institutional guidance discuss quality, ethics, participation, relevance and value for money, they rarely articulate a clear, operational or shared definition of success. This conceptual gap creates practical challenges for evaluators and commissioners alike – generating misaligned expectations, unrealistic scoping, inefficient delivery, and ultimately, underuse of evaluation findings.

This presentation draws on two papers. The first paper offers the first systematic analysis of how evaluation success is conceptualised across the field, drawing on a structured content analysis of fifty evaluation frameworks from governments, multilateral organisations, philanthropic funders, and professional associations. Using consistent extractive queries, the study examined whether and how these frameworks define success, how they address constraints, and what operational guidance they provide for managing real-world evaluation challenges.

Three findings stand out. First, ‘success’ is almost never defined. Frameworks articulate what evaluation is and what quality standards it should meet, but they do not specify what it means for an evaluation to have succeeded. Second, success is implicitly treated as synonymous with quality, with methodological credibility elevated above all other dimensions – even when timeliness, usefulness and proportionality are crucial for decision-making. Third, frameworks lack project-management logic: they do not acknowledge trade-offs between scope, quality, time and resources; they treat evaluations as static rather than adaptive undertakings; and they provide limited guidance for managing change, risk, uncertainty, or evolving stakeholder needs. These patterns hold across governments, multilaterals, philanthropies and professional bodies, though with variation in emphasis.

Drawing on project management theory, decision science, and performance management literatures, the paper argues that evaluation should be understood as a form of professional project work: temporary, structured, and delivered under constraints. Success must therefore be defined and managed through an integrated architecture – not a list of principles.

The second paper introduces the CUTE Framework to address this gap. CUTE is a practical and multidimensional model for defining, delivering and evaluating evaluation success across four domains:

(1) Credibility – explicit, proportionate and agreed standards for methodological quality, ethics, risk of bias, data protection, and interpretation.

(2) Usefulness – clarity about who the evaluation is for, what decisions it will inform, and how evaluation questions link to specific uses.

(3) Timeliness – alignment of milestones and reporting with real decision windows, and mechanisms for adapting when timelines shift.

(4) Efficiency – appropriate resourcing (financial, human, technical), proportionality to programme scale and complexity, and to partners’ capacity and learning needs.

CUTE provides operational tools for the design, delivery, review and learning stages of evaluation. By integrating credibility, usefulness, timeliness and efficiency into a coherent model, it offers a new standard for evaluation success – one better aligned with real-world constraints and the needs of UK evaluators, commissioners, and decision-makers.