- Contributor:
-
Charlie Ellis
(ImpactEd Group)
- Format:
- Poster
- Mode:
- Presenting in-person
- Sector:
- Private sector / Commercial
Short Abstract
Most evaluations end with a report. The best evaluations end with action. Understanding Attendance worked with 400 schools and 300,000 pupils to discover what bridges that gap. This session shares four transferable strategies for designing evaluations where action is built in from the start.
Description
Most evaluations end with a report. The best evaluations end with action. This presentation explores what sits in between—the deliberate infrastructure needed to turn findings into change at scale.
When almost 20% of pupils in England are persistently absent, schools desperately need insights they can act on—not just more data to monitor. This presentation shares lessons from Understanding Attendance, a national action research project led by ImpactEd Evaluation spanning over 400 schools, 10,000 parents, 300,000 pupils and three academic years, demonstrating how evaluation can be designed for action from the outset.
Traditional attendance evaluation tracks and compares absence rates between schools or pupil groups. Understanding Attendance took a different approach: what if we evaluated the social, emotional and behavioural factors schools can actually influence? Drawing on existing validated measures, our own data and learning, and collaboration with school leaders, we designed a diagnostic exploring sense of belonging, relationships, attitudes towards attendance, and practicalities such as routine and sleep, alongside attendance data. The question wasn't just "who's absent?" but "what's driving absence for pupils in your specific context, and what can you do about it?"
This presentation shares four critical innovations that helped bridge evaluation into action, with practical implications for evaluators across sectors.
First: Make benchmarking meaningful. Our initial national benchmarks seemed helpful, but context-sensitive comparison—by time of year, pupil characteristics, and attendance distribution—dramatically increased actionability. Attendees will explore how granular benchmarking makes comparative data genuinely relevant.
Second: Align the timing of findings with decision-making. Rather than end-of-year reports, we built iterative data windows aligned with schools' natural planning cycles, with automated reporting enabling quick turnaround. Autumn insights inform spring interventions; summer data shapes next year's strategy. Building stakeholder decision cycles into evaluation design from the start increases genuine use of findings.
Third: Create spaces for peer learning, not just individual reports. Our work at Trust level, together with half-termly community webinars and research insight sessions, brings schools together to explore emerging findings, hear from sector speakers, and discuss challenges with peers. When one school shares how they're building belonging, fifty others gain practical ideas. Evaluators can play a convening role, not just a reporting role—creating communities where stakeholders learn together rather than reading alone.
Fourth: Differentiate insights for different users. Senior leaders need strategic overview; attendance leads need diagnostic detail; SENDCos and PP-leads need subgroup-specific benchmarking; classroom teachers need pupil-level insights. Hear how we created layered reporting for multiple audiences from a single dataset, ensuring findings reach beyond the commissioning stakeholder.
The presentation also addresses important tensions: balancing rigorous methodology with accessible reporting for non-technical audiences; supporting individual schools while maintaining research integrity across the cohort; and sustaining engagement when findings reveal uncomfortable truths about systemic barriers schools cannot easily address.
Attendees will leave with practical strategies for designing evaluations that bridge insight to action, whether working with schools, charities, government, or other complex environments. The presentation draws on case studies showing successes, including interventions directly shaped by diagnostic findings, and honest reflections on where the evaluation-to-action bridge still needs strengthening.