- Contributors:
- Steve Powell (Causal Map Ltd)
- Alastair Spray (INTRAC)
- Dena Lomofsky (Southern Hemisphere)
- Format:
- Poster
- Mode:
- Presenting in-person
- Sector:
- Private sector / Commercial
Short Abstract
We present two evaluation case studies which show how AI‑assisted causal coding can turn large volumes of interviews and reports into theory-driven or theory-free causal visuals with traceable evidence. We share workflows, accuracy checks and design choices to make the maps useful in evaluations.
Description
Evaluators often struggle to process and communicate thick qualitative evidence quickly and convincingly. Causal mapping offers a concise visual language - outcomes, drivers and intermediate steps linked into a causal map with supporting quotes - but building reliable causal maps at scale used to require weeks of manual coding. This talk shows how AI‑assisted coding accelerates analysis and synthesis while ensuring that every visual element remains traceable to verbatim text in context.
We present two recent evaluations:
Case A – Using causal mapping and contribution analysis for the final evaluation of a large multi-country programme (Dena). We will explain how hundreds of interview transcripts and internal reports were uploaded for causal coding with a “verifiable AI” technique, and how the resulting causal maps fed into the Contribution Analysis step.
We address two challenges:
How to agree on a vocabulary for the common causal elements across the programme components: what to do when the language of the Theory of Change itself is ambiguous and terminology varies across contexts? We will explain how analysts validated the AI's suggestions and managed the merging of terms (e.g., “coalition‑building”/“alliance work”).
How to narrate the story of change that the maps are showing in an accessible way.
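The term-merging step mentioned above can be illustrated with a minimal sketch: synonymous causal-factor labels are mapped to an analyst-approved canonical vocabulary before the map is built. The lookup table and labels here are hypothetical examples, not the project's actual coding scheme.

```python
# Hypothetical synonym table: analyst-approved canonical labels.
# Unknown labels pass through unchanged for an analyst to review.
CANONICAL = {
    "coalition-building": "coalition-building",
    "alliance work": "coalition-building",
    "alliance-building": "coalition-building",
}

def merge_label(label: str) -> str:
    """Map a raw coded label to its canonical form, if one is agreed."""
    return CANONICAL.get(label.strip().lower(), label)

# Two links coded with different wording now aggregate under one factor.
links = [("alliance work", "policy change"),
         ("coalition-building", "policy change")]
merged = [(merge_label(a), merge_label(b)) for a, b in links]
```

Keeping unknown labels unchanged, rather than guessing, is what keeps the analyst in the loop: only validated merges enter the canonical table.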
Case B – Making more sense of masses of Outcome Harvest data (Alastair).
This was a multi-country, multi-year project with large amounts of Outcome Harvest data from 692 individual sources.
Both evaluations involved highly sensitive data, and partners were understandably concerned about automated processing. In this case, we gained approval even from partners who were initially hesitant, mainly through the use of automated offline anonymisation of the data before any further processing.
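As a minimal illustration of the anonymisation idea (not the project's actual tool, which would also need named-entity recognition for personal names), obvious identifiers can be redacted locally, before any text leaves the machine:

```python
import re

# Assumed patterns for two common identifier types; a real pipeline
# would cover names, places and organisation-specific identifiers too.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def anonymise(text: str) -> str:
    """Replace matched identifiers with placeholder tags, offline."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

redacted = anonymise("Contact Ama at ama@example.org or +44 20 7946 0000.")
```

Because the redaction runs entirely offline, the raw identifiers never reach any external AI service.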
Even more than Case A, this was a highly complex programme with many partners: each country had its own Theory of Change (ToC) alongside a global programme ToC, with different outcomes for different countries, programmes and donors, plus learning questions and hypotheses that the client wanted to check against the data. The client was finding it hard to grasp the big picture. Causal mapping enabled them to triangulate the other methods the team used to evaluate the programme and to articulate causal chains clearly at several levels. Client feedback was very positive.
Across both cases we will demonstrate: (1) a reproducible workflow from corpus → verifiable coding by AI → iterative refinement of labels → application of standard algorithms to answer evaluation questions → maps and tables; (2) validation and accuracy checks; (3) supervised use of AI to create accessible text summaries of the data contained in the maps.
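The shape of this workflow can be sketched in a few lines. The data structures and field names below are assumptions for illustration, not the actual Causal Map implementation; the key point is that each coded link carries its verbatim quote and source, so every edge in the aggregated map stays traceable to evidence.

```python
from collections import Counter

# Hypothetical coded links, as they might emerge from AI-assisted
# coding: each one keeps the verbatim quote and its source document.
coded_links = [
    {"cause": "training", "effect": "new skills",
     "quote": "the training gave us new skills", "source": "interview_01"},
    {"cause": "new skills", "effect": "better jobs",
     "quote": "with those skills people found better jobs", "source": "interview_02"},
    {"cause": "training", "effect": "new skills",
     "quote": "we learned a lot at the workshops", "source": "interview_03"},
]

# Aggregate into a map: edge weight = number of supporting links,
# with quotes retained as the evidence behind each edge.
edges = Counter((link["cause"], link["effect"]) for link in coded_links)
evidence = {}
for link in coded_links:
    key = (link["cause"], link["effect"])
    evidence.setdefault(key, []).append((link["source"], link["quote"]))

for (cause, effect), n in edges.items():
    print(f"{cause} -> {effect}  (supporting sources: {n})")
```

Clicking a link in the interactive map corresponds to looking up its entry in the evidence store, which is what makes the provenance of every visual element checkable.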
Why Theme 3? Because the product is not the map or the algorithm - the aim is to strengthen shared understanding. Visuals with transparent provenance (every node and link opens the quotes behind it) are intended to communicate complex findings in a way that promotes discussion. We will close with a compact checklist of dos and don'ts to help others pilot AI‑assisted causal mapping responsibly.