- Contributors:
- Niki Wood (Integrity)
- Jo Down (Accadian Ltd)
- Format:
- Poster
- Mode:
- Presenting in-person
- Sector:
- Private sector / Commercial
Short Abstract
Evaluating a mega-portfolio of interventions? We’ve been there. Drawing on our work evaluating cyber portfolios, we unpack how we tackled ill-suited criteria, abstraction, data-access hurdles, and non-evaluator audiences, sparking debate on what credible evaluation really means at the mega-portfolio scale.
Description
We are delivering a Portfolio Monitoring, Evaluation, and Learning (MEL) programme focused on a portfolio of cyber interventions. This 'portfolio' is in fact a portfolio in name only: it houses multiple sub-portfolios, each of which contains multiple programmes that in turn house projects. This is evaluatively challenging, as our role involves portfolio-level evaluations and reviews. These pieces of work must cut across a wide range of interventions, delivery bodies, and actors, all operating at different levels of society.
Delivering useful and actionable evaluation at this level of abstraction (i.e. cross-portfolio) is difficult, and cyber and security-sector programming brings acute restrictions on access to information. Furthermore, security-sector evaluation commissioners often have no MEL or evaluation background, and require very different evaluation products and decision-making support to translate evaluation into action.
In this session we aim to share our experience and spark discussion with others evaluating mega-portfolios, or otherwise evaluating the security sector. We will focus on two evaluative reviews we recently conducted, one on coherence and one on Gender Equality and Social Inclusion (GESI). We will discuss how we created analytical frameworks and evaluative practice that were practical, defensible, and useful despite operating at a mega-portfolio level. We will also discuss how we delivered these reviews and produced useful and actionable findings that overcame the barriers described above. We hope to show how these solutions might translate into others’ contexts.
In running this session, we will outline our context, the barriers we faced, and how we overcame them. We then wish to spark discussion with the audience on important questions facing evaluators in our position:
- Can and should government take an OECD-DAC approach to evaluations and reviews in these thematic areas, or when operating at a mega-portfolio level?
- How do we assess security-sector topics at a portfolio level, defensibly but flexibly, without over-engineering new criteria that face the same problems?
- How do we define evidence, success, and credibility in reviews and evaluations that operate at a mega-portfolio level? Do you think we got it right?
- Given the above, how do we evaluate for non-evaluation clients?
Relevance to the theme: this session is relevant to ‘building evaluation cultures’ because our journey is not about methods and approaches alone, but about how evaluative practice is designed for a unique programming culture. We discuss how we created something useful and helped foster a culture of commissioning, participating in, and using evaluation (as well as what went less well).