- Contributors:
- Jennika Virhia (The University of Glasgow), Danni Anderson (University of Glasgow), Elly Hiby (ICAM), Nai Rui Chng (University of Glasgow)
- Format:
- Poster
- Mode:
- Presenting in-person
- Sector:
- Academia
Short Abstract
Evaluability assessments (EAs) are tools that can help evidence impact and bridge the evaluation-action gap. We conducted four EAs with organisations undertaking dog population management (DPM) to strengthen their monitoring and evaluation capacity and to advocate for humane DPM globally.
Description
Background
Evaluability assessment (EA) is a rapid, practical tool for supporting organisations that face challenges in demonstrating impact. EA has recently been applied to support evaluation planning and to improve monitoring and evaluation (M&E) systems (Hamilton-West et al., 2019). Within the field of dog population management (DPM), numerous organisations around the world conduct passionate and intensive work to humanely manage dogs, yet they lack the M&E knowledge and tools needed to evidence the impact of their work (Hiby et al., 2017). Animal welfare organisations such as The International Companion Animal Management Coalition (ICAM) are working to overcome these challenges by investing in research and methodological expertise to help the charities and local governments carrying out DPM increase their M&E capacity. The aim of this research was to demonstrate how EAs can bridge the evaluation-action gap by increasing M&E support to organisations carrying out DPM, enabling them to better evidence their impacts. Successful case studies may then be used to champion humane DPM globally.
Methods
The M&E team, a partnership between ICAM and the University of Glasgow, included evaluation scientists, DPM experts, and epidemiologists with expertise in quantitative methods. The team worked collaboratively to provide direct support to a selected group of organisations implementing DPM. We conducted four EAs with organisations located in Thailand, Sri Lanka, Georgia, and India. For each organisation, the EA process comprised three participatory workshops (one online, two in person) to meet with stakeholders, co-develop a theory of change, prioritise outcomes, identify key performance indicators, and assess data availability and data needs. Each process culminated in a clear, actionable set of M&E recommendations co-developed with the local organisation. Once recommendations were identified, data experts worked intensively and collaboratively with the organisations to share, analyse, and interpret data to showcase the impacts of their DPM activities.
Results
The four organisations that participated in the EAs had varying levels of M&E capacity. Three were collecting data on their DPM efforts, with basic analysis, interpretation, and reporting, while one had a strong track record of publications. The M&E team provided direct support to each organisation, and a bespoke plan was co-developed with each to strengthen its M&E capacity going forward. Specific actions varied across organisations and included input on improving data collection tools, data cleaning, data analysis, data visualisation, and interpretation, with the ultimate aim of publishing results. In some cases, organisations adapted their practices for more effective data capture.
Conclusions
We conclude that evaluability assessments can help bridge the evaluation-action gap within DPM by supporting organisations to increase their M&E capacity, in turn facilitating operational decision-making towards evidencing impact. This strengthens the evidence base for successful DPM approaches, which may be used to advocate for humane DPM globally.