T0214


Bridging the gap between qualitative and quantitative methods: using Qualitative Comparative Analysis (QCA) with survey data to identify successful pathways in a large population for the Behaviour Hubs programme evaluation
Contributor:
Barbara Befani
Format:
Poster
Mode:
Presenting in-person
Sector:
Nonprofit / charity

Short Abstract

This presentation discusses the design of the Behaviour Hubs programme evaluation, outlining the opportunities and challenges encountered at each step. The innovative design combined Realist Evaluation and Qualitative Comparative Analysis (QCA) with a survey designed specifically for QCA.

Description

This presentation discusses the design of the Behaviour Hubs programme evaluation. The Behaviour Hubs programme was launched to support schools and Multi-Academy Trusts (MATs) in improving pupil behaviour. The programme encouraged 'lead' schools and MATs with exemplary behaviour cultures to collaborate closely with 'partner' schools seeking to improve their pupil behaviour. Its objectives were to ensure that more teachers felt supported by senior leaders in managing misbehaviour, and understood and consistently applied their school's behaviour policy, ultimately leading to fewer incidents of disruptive behaviour.

The programme, which supported over 650 schools, was built on centrally organised bespoke resources and a taskforce of behaviour advisers. It delivered customised specialist training, networking events, and open days, and encouraged schools to build relationships with one another.

The evaluation aims were to: a) determine whether the programme had met its strategic objectives and achieved its projected outcomes for schools, staff, and pupils; b) understand how and why the intervention did (or did not) meet its objectives; and c) investigate the change mechanisms triggered by the programme that produced the observed outcomes and impacts, examining variation across different schools and respondent groups.

The combination of Realist Evaluation and Qualitative Comparative Analysis (QCA) was considered the most appropriate design because of its focus on change mechanisms and contextual variation, as well as its ability to generalise findings to medium and large numbers of cases (the survey received responses from 105 of the 650+ participating schools).

The design was innovative because, while there are relatively few examples of QCA applied to large-N datasets and survey data, there are almost none in evaluation. The presentation outlines the opportunities and challenges encountered at each step: from model specification following exploratory case study work, to the design of a bespoke QCA survey yielding a dataset of consistently comparable cases, through to calibration, running the QCA algorithms, and interpreting and presenting the findings.
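To illustrate the core analytic steps named above (calibration and truth-table analysis), here is a minimal crisp-set QCA sketch in Python. The condition names, thresholds, and data are hypothetical placeholders, not taken from the Behaviour Hubs evaluation, and real applications would use dedicated QCA software with fuzzy-set calibration and logical minimisation.

```python
# Minimal crisp-set QCA sketch: calibrate raw survey scores into set
# memberships, group cases into configurations (truth-table rows), and
# compute each configuration's consistency with the outcome.
from collections import defaultdict

def calibrate(value, threshold=0.5):
    """Crisp-set calibration: membership 1 if the score meets the threshold."""
    return 1 if value >= threshold else 0

def truth_table(cases, conditions, outcome):
    """Return {configuration: {'n': cases, 'consistency': share with outcome}}."""
    rows = defaultdict(lambda: {"n": 0, "out": 0})
    for case in cases:
        config = tuple(case[c] for c in conditions)
        rows[config]["n"] += 1
        rows[config]["out"] += case[outcome]
    return {cfg: {"n": r["n"], "consistency": r["out"] / r["n"]}
            for cfg, r in rows.items()}

# Hypothetical survey-derived scores, one case per school (names invented).
raw = [
    {"training": 0.8, "leader_support": 0.9, "improved": 0.9},
    {"training": 0.7, "leader_support": 0.2, "improved": 0.3},
    {"training": 0.2, "leader_support": 0.8, "improved": 0.4},
    {"training": 0.9, "leader_support": 0.7, "improved": 0.8},
]
cases = [{k: calibrate(v) for k, v in case.items()} for case in raw]

table = truth_table(cases, ["training", "leader_support"], "improved")
for cfg, row in sorted(table.items()):
    print(cfg, row)
```

In this toy dataset only the configuration combining both conditions is fully consistent with the outcome, which is the kind of conjunctural pattern QCA is designed to surface.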

It shows the kinds of causal patterns QCA can discover, how well they fit the impact evaluation questions, and the transparency and repeatability of the analysis procedures.