- Convenors:
- Ella Haruna (University of Wolverhampton, Centre for International Development and Training)
- Rachel Slater (University of Wolverhampton)
- Formats:
- Papers
- Stream:
- Impactful development?
- Location:
- Berrill Theatre
- Sessions:
- Friday 21 June
Time zone: Europe/London
Short Abstract:
Expenditures on capacity strengthening in international development are substantial, yet the methods for evaluating the impacts of these investments are comparatively rudimentary. Papers are welcomed that explore new approaches to measuring the impacts of capacity strengthening projects/programmes.
Long Abstract:
Capacity strengthening features in the programming of a broad range of development actors, from governments to aid agencies to NGOs, and across a wide range of sectors. Moreover, capacity strengthening activities represent a substantial share of international development spending, with Denney et al. (2017) estimating that it may equate to a quarter of all aid. However, as the burden on international development programmes and projects to demonstrate results has grown, and evaluation methods have become increasingly robust, assessments of the impact of capacity strengthening have struggled to keep up. The methods for evaluating the impacts of capacity strengthening are rudimentary relative to other types of development spend. While there are major challenges to establishing control groups and counterfactuals in impact assessments of capacity strengthening activities, there has been substantial growth in other methods for understanding impact where control groups are not achievable - including, for example, outcome mapping techniques, theory-based evaluation and a greater focus on the politics and power relations that underpin the success and failure of programmes. This panel welcomes presentations and contributions from both researchers and practitioners that detail experiences of developing and using robust and innovative ways to measure the impacts of capacity strengthening projects and programmes.
Accepted papers:
Session 1 Friday 21 June, 2019
Paper short abstract:
This paper problematizes donor demand for demonstration of results of international development training programmes alongside the near-universal acceptance of capacity development activities, exploring the challenges of assessing capacity strengthening outcomes from a practitioner perspective.
Paper long abstract:
The University Centre where the author works has for 45 years embraced the notion that Capacity Development (CD) lies at the heart of all sustainable International Development (ID). In current development agendas almost every international development project claims to build, develop or strengthen capacity. At the same time, measuring and attributing the long-term impacts of short to medium-term CD activity and evidencing the return on investment pose a challenge for practitioners. This paper asks whether the combination of methodological challenges in CD evaluation and the universal championing of CD projects in ID undermines or enhances effective capacity strengthening. It draws on the author's experience delivering capacity strengthening projects in multiple countries with multiple agencies, particularly an externally funded short-term training programme delivered across 19 countries in the Caribbean. This preliminary research explores the opportunities for, and constraints on, measuring and demonstrating results in training-focused capacity strengthening from a practitioner perspective, and assesses how far donor attitudes towards CD influence the 'burden of proof' on investments in training.
Paper short abstract:
The paper reflects on two experiences of using the Qualitative Impact Protocol to evaluate health sector capacity building programmes. It then turns to opportunities and constraints to building local capacity to commission and conduct credible impact evaluations of externally sponsored programmes.
Paper long abstract:
The Qualitative Impact Protocol (QuIP) has been designed and commercially tested to collect, code and synthesise rich narrative feedback from the intended beneficiaries of multi-faceted development interventions in complex contexts. This paper is grounded in two experiences of using it to assess capacity-building projects in the health sector. The first generated evidence on how international volunteer health educators affected students' learning experiences in medical and nursing colleges in Malawi, Tanzania and Uganda. The second focused on graduate midwives' experiences of transitioning from study to professional practice in Uganda. The paper partly reflects on technical methodological lessons learnt from these and other experiences with the QuIP, but more importantly on its ambiguous potential to foster both political deliberation and legitimation of existing stakeholder power relations within capacity building programmes. Moving beyond these specific case studies, the paper reflects on opportunities and constraints to strengthening within-country circuits of accountability for the performance of internationally sponsored capacity building projects. This is partly about supply: how to strengthen locally-rooted impact evaluation research, consulting and commissioning capacity. More important, however, are questions about governance and the readiness of international donors to support within-country evaluation systems and cultures.
Paper short abstract:
Defining poverty situations and capacity strengthening as "emergent patterns of interaction" offers an alternative to the results-chain explanation of how social change arises. This has important practical implications for identifying and measuring indicators of change.
Paper long abstract:
Evaluations of capacity strengthening initiatives are typically designed around a program's logical framework and theory of change. However, the extent to which these approaches provide an explanation for how social change happens is controversial. Supporters and opponents of these approaches provide equally compelling arguments and evidence to corroborate their views. This controversy presents serious challenges for designing robust evaluation methodologies. One such challenge is how to select appropriate indicators of change within a context and discourse where the question of how social change even happens is hugely contested.
This paper presents a brief critique of the Logical Framework Approach and the Theory of Change Approach to shed some light and clarity on the debate. An alternative explanation for change is offered based on the social interpretation of the complexity sciences, adapted by the author to an international development context. Human social phenomena such as markets, health services, businesses, governments and capacity strengthening processes are defined as emergent patterns of interaction between people. The paper goes on to explore how to select indicators of change based on this understanding of social phenomena and how they change. Concepts and practical applications of the approach are illustrated using an NGO's micro-credit program for owners of small businesses in Sierra Leone. The paper highlights the crucial and central role played by those who co-create businesses in identifying indicators of change. Implications of viewing social change as emergent patterns of interaction, as compared to the dominant discourse rooted in results-based management, are discussed.
Paper short abstract:
Capacity development interventions need to work at individual, organisational and institutional levels to be sustainable. Evaluations need to examine outcomes at all levels, the interactions between them, and external factors in order to assess the contribution of the project. Performance stories can do this.
Paper long abstract:
Current concepts and frameworks for capacity development all recognize that effective capacity development requires much more than simply providing training to individuals. The OECD DAC defined capacity development as "the process whereby people, organisations and society as a whole unleash, strengthen, create, adapt and maintain capacity over time". This implies the need to work at individual, organisational and institutional levels to achieve sustainable capacity development. This is elaborated explicitly in UNDP's Primer on Capacity Development (2009), which extends the concept far beyond simply the ability of an individual or organisation to do what they do. The European Centre for Development Policy Management, in its Capacity, Change and Performance Study Report (2008), identified five wide-ranging capabilities, of which only one focused on this: the capability to carry out technical, service delivery and logistical tasks. The other four were the capabilities to commit and engage, to adapt and self-renew, to balance diversity and coherence, and to relate and interact. There is a similarly huge range of approaches and activities which can contribute to capacity development. Evaluations of capacity development initiatives need to assess outcomes along multiple dimensions at many levels, the interactions between them, and the external factors which might also have influenced the outcomes, in order to assess causality and contribution. This paper will present a participatory, collaborative approach used in evaluations of the global Think Tank Initiative and the Indonesia Knowledge Sector Initiative, and how these principles are applied in INASP's emerging approach to learning and capacity development for research organisations in developing countries.
Paper short abstract:
This paper presents learning from theory-based evaluations of DFID programmes aiming to build capacity to tackle crime and corruption in the Caribbean. It showcases lessons for anti-corruption programming drawn from tools designed to measure capacity building for anti-corruption.
Paper long abstract:
Efforts to evaluate the impact of anti-corruption programmes face numerous difficulties related to the complexity and hidden nature of corruption, its political sensitivity, and the ability of corrupt networks to adapt so as to evade interventions. This can make it difficult both to measure changes in corruption levels and to attribute them to interventions. This paper draws on learning from two evaluations of DFID programmes in the Caribbean to demonstrate how anti-corruption theory is being translated into law enforcement practice, e.g., assessing economic models of criminals as rational actors whose behaviour can be changed through incentives and disincentives, and social norms models which argue that changing behaviour requires deeper social change that resonates with local norms.
The paper shows how a theory-based evaluation approach - building on academic research about what works in anti-corruption - can be used to test assumptions and construct a more nuanced theory underpinning an intervention to tackle corruption. For example, our network analysis tool builds on research about the importance of social networks among law enforcement professionals, while our assessment of organisational leadership relates to an understanding of the need to change norms as well as incentives. The paper introduces purpose-built tools that underpin analysis of capacity building, including a theory of change, capacity assessment for anti-corruption agencies, network analysis and policy trackers, and demonstrates their value in this challenging context. The paper shares learning applicable to other programmes that tackle corruption by supporting law enforcement institutions and highlights the benefits of a partnership-based approach.
Paper short abstract:
Linking to ongoing work with a large international development organisation, this paper puts forward and critiques a capability strengthening evaluation framework for a 70:20:10 organisational learning programme that seeks to improve development effectiveness.
Paper long abstract:
Within capacity development in international agencies the expectation, increasingly, is for organisational learning to take place through blended learning approaches. One such approach is a combination of 10% face-to-face training, 20% self-directed online learning and 70% "on the job" learning. Assessments of whether and how this popular 70:20:10 competency model works in international development organisations, and whether it contributes to enhanced development effectiveness, are at best embryonic, with researchers grappling to find robust measures not only of knowledge but also of skills and attitudes.
This paper reflects on the early experiences of developing an evaluation system for capability strengthening using a 70:20:10 blended approach in the UK Department for International Development. From a technical point of view it considers what metrics might enable a better understanding of whether and why non-linear learning pathways work, what it takes to commit to (blended) learning, and the shared responsibility between learner, organisation and educator. It outlines how we are attempting to measure learning outcomes in informal (experiential), social and formal situations, and reflects on the challenges associated with evaluating the impact these learning outcomes have on development effectiveness more broadly.