T0181


Building Capacity for AI and Evaluation in Homelessness Prevention: Early Insights from Two Randomised Controlled Trials 
Contributors:
Luke Arundel (Centre for Homelessness Impact)
Ella Whelan (Centre for Homelessness Impact)
Format:
Poster
Mode:
Presenting in-person
Sector:
Nonprofit / charity

Short Abstract

AI adoption in public services is growing fast. The homelessness sector needs capacity to both use these tools and evaluate them rigorously. We share early insights from two randomised trials testing predictive machine learning and generative AI interventions that aim to reduce homelessness.

Description

The use of AI is rapidly expanding across public services, but evidence struggles to keep pace with adoption. In homelessness services, where AI tools hold promise, we risk scaling interventions that appear effective without testing their impacts. As with other interventions, AI tools should be robustly tested to understand their impact on the outcomes we care about and to identify any unintended consequences.

The Centre for Homelessness Impact is conducting two complementary randomised controlled trials: one funded through MHCLG's Test & Learn programme (the first programme globally to invest in robust evidence on the impact of homelessness interventions) and one funded through the Cabinet Office's Evaluation Accelerator Fund:

Trial 1: Predictive machine learning for upstream prevention (4 Local Authorities, ~2,000 households)

Testing whether machine learning models can identify households at risk of homelessness, and whether proactive phone calls to at-risk households reduce homelessness. Building on promising pilots, this trial addresses important questions about data quality, the practical application of predictive models, and scalability across local authorities with varying levels of data maturity. It forms part of the £15m Test & Learn and Systems-wide Evaluation Programme.

Trial 2: Generative AI for housing advice (Southwark Council, ~9,000 households)

Evaluating an AI chatbot that provides personalised housing advice drawn from trusted sources (Shelter, Citizens Advice, government guidance). Unlike general-purpose AI tools such as ChatGPT, this chatbot is specifically designed to assess someone's housing situation, offer tailored advice, and draft letters to landlords or councils. The intervention addresses a crucial gap: people often don't seek help until crisis point and can find advice difficult to access. By proactively reaching out to at-risk households, the chatbot offers accessible, 24/7 guidance before households reach statutory thresholds.

This presentation offers methodological insights and implementation learning from setting up trials to evaluate the use of AI. We share learning on:

Embedding rigorous evaluation in fast-moving tech contexts: Pre-registration protocols, ethical oversight, and adaptive designs that balance flexibility and methodological rigour.

Navigating data governance: Practical lessons from data-sharing agreements and concerns around algorithmic decision-making across multiple partners and local authorities.

Building organisational capacity: Understanding variation in data maturity and implications for scaling data-driven approaches.

Addressing ethical dimensions: Considering questions of algorithmic fairness and consent within the context of randomised trials.

These trials show that evaluation can keep pace with technology, and that successful adoption requires building technical capacity while addressing ethical concerns. Overcoming these challenges strengthens evaluation practice, enabling innovations to be tested while generating robust evidence to inform decision-making.

This work directly addresses the call to explore "fast-emerging areas such as AI and new ways of working". By sharing learning from these groundbreaking evaluations, we support evaluators and policymakers asking: How do we test AI tools? What conditions enable adoption? How do we ensure these technologies serve vulnerable populations?

In a sector where neither AI applications nor rigorous trials are yet commonplace, these evaluations are building both the capacity and acceptance needed for evidence-based innovation.