- Convenors:
- Ben Eyre (University of East Anglia), Mario Schmidt (Busara), Ben Jones (University of East Anglia)
- Discussant:
- Teddy Atim (Busara Center for Behavioral Research)
- Format:
- Paper panel
- Stream:
- Methods - research, participation and practice
Short Abstract:
Evidence-based policy making is often presented as an antidote to the uncertainties of intervention in complex contexts. But different actors often have distinct aims. How do they come together? This panel looks at people and processes that make data, and their perspectives on what makes it "good."
Description:
Authoritative representation of impact is presented as essential to contemporary social and environmental interventions by governments and non-state actors. Evidence-based policy making is often presented as an antidote to the uncertainties of intervention in complex contexts. Its advocates harness multiple practices to evidence impact in order to justify their actions. Investors and philanthropists prioritise “actionable” data to inform decision making about effective intervention as well as cost-effectiveness. An infrastructure of data production and analysis is essential to organising principles of sustainability and scale, and to managing them through a framework of risk. But distinct aims, including scientific rigour, foundational knowledge, material reward, and capital allocation, can impinge on one another. How do they come together? What do the multiple actors who make the data do? They include “local guides”, research officers and assistants, economics experts, NGOs, development agencies, investors, and politicians. This panel explores ideas about “good data” through attention to the people and practices that make it work. Questions could include:
1) How and by whom is “good data” defined?
2) How are impact data produced, used, and experienced by different actors across contexts?
3) How does a focus on data quality differ from, or on the contrary help us make sense of, concerns with metrics, quantification, and “audit cultures”?
4) How can we grasp the diffuse hierarchical relations that increasingly inform development practice?
5) What opportunities are there for (critical) engagement between ethnographers and those committed to evidence-based policy making?
We also welcome papers that explore impact data through other questions.
Accepted papers:
Session 1
Paper short abstract:
Although enumerators are crucial for the success of development projects, their lived experiences remain understudied. Sharing ethnographic findings from an interdisciplinary project, we examine how the working conditions of research officers in large-scale quantitative surveys impact data quality.
Paper long abstract:
Enumerators are crucial for the success of development projects. They collect the data which is subsequently analysed by project managers and ends up informing policy decisions. Yet, how the lived experiences and working conditions of enumerators impact data quality in large-scale development surveys is understudied. Based upon ethnographic fieldwork in Kenya with experienced as well as early-career enumerators, we present preliminary results of an interdisciplinary project funded by the British Academy.
We focus specifically on ethnographic observations made by ourselves and a group of citizen ethnographers, examining enumerators’ motivations to deliver good work and how research officers themselves understand the relation between their working conditions and high-quality data. We discuss how a hitherto unrecognized professional ethics among enumerators creates incentives to work well when enumerators are valued as an integral part of knowledge production, while the same professional ethics reduces data quality when it goes unrecognized by other stakeholders.
Our paper thereby offers important insights into the complex, and sometimes contradictory, relations between survey work in East Africa as a transactional relation between enumerators and principal investigators and/or research organizations, on the one hand, and survey work as a source of professional pride, on the other.
Paper short abstract:
Based on our primary data collection experience, mostly in Nepal, we share reflections on the structural constraints imposed by the current business model of donor-funded multi-country programs, which undermine claims of empowerment and transformation.
Paper long abstract:
During fieldwork in the context of qualitative research on energy, gender equality, and social inclusion, our encounters with poor and socially stigmatised women triggered ethical and methodological questions for us. Rather than discussing methodological details, this commentary unpacks issues related to the research process that we encounter in the context of research for development (R4D). We reflect on the discrepancy between claims about how R4D can foster transformation and the constraints we face in conducting research, and how these undermine the claimed outcomes. We conclude that the roots of the dysfunctions along the research process lie primarily upstream: in top-down research designs disconnected from the local realities we aim to change, and in business models incompatible with the demands of quality research. Unrealistic donor expectations and conditionalities imposed on research teams, such as the need to fit within specific programs decided mostly by urban actors based in the Global North, challenge ethical principles and are rarely discussed or debated.
Paper long abstract:
In development projects and interventions, survey work assumes that good data follows a particular track. Enumerators should be trained, should follow the survey wording, and should operate under a series of protocols that meet externally set standards. Deviations from this are seen as a sign of poor performance or pathologized as “cooking data” and undermining the overall quality of work. Drawing on a body of ethnographic work studying the lives of enumerators in Kenya and Uganda, we found a number of ways of working that did not conform to “best practice”, but which were adopted by enumerators because they were felt to elicit more truthful, honest answers from survey participants. Examples include softening questions, where the enumerator knows a more reliable answer can be achieved through rephrasing; deciding to offer refreshments at a different point during the interview cycle to the one agreed; and probing patiently for responses in a conversational manner, even when that might mean the enumerator does not achieve their assigned quota. These were all ways in which enumerators showed a reflexive relationship to their work, and a degree of craft. Such practices, informed as they were by inter-personal ethics, self-evaluation, and an interest in getting the right answer, are not captured in framings that treat divergences as incompetence at best, or “cooking data” at worst. Such practices, we found, were also shaped by the desire of enumerators not to “feel like a robot” as they moved from home to home.
Paper short abstract:
I bring insights from working with Refugee-Led Organizations, focusing on ethical, participatory data collection. I aim to explore how lived experiences shape impactful research, challenge power dynamics, and bridge community-driven data with evidence-based policy in humanitarian contexts.
Paper long abstract:
Data, Power, and Inclusion: Perspectives from Refugee-Led Research
This paper explores the production and use of “good data” in humanitarian and development contexts, focusing on the role of Refugee-Led Organizations (RLOs). By employing field researchers from refugee communities, RLOs ensure that the lived experiences of displaced individuals shape data collection and analysis, challenging traditional metrics of scientific rigor and promoting more inclusive research practices that reflect the realities of displacement.
A key aspect of this paper is the ethical engagement of research participants. Compensating interviewees through incentives acknowledges their time, effort, and potential financial costs, such as transportation or lost income. This practice underscores a commitment to equitable partnerships and enhances the integrity of data production by recognizing the contributions of community members in a tangible way.
The paper also examines how RLOs navigate complex power dynamics in producing data that is relevant, context-specific, and impactful for the communities they serve. By prioritizing community-based research, RLOs challenge traditional top-down research models, redefining concepts of accountability, power relations, and scientific rigor in data collection.
In addressing the limitations of conventional metrics, the paper explores how refugee-led research can inform more effective, contextually appropriate interventions, reshaping how evidence is used in humanitarian action and policy-making. Ultimately, it advocates for a more inclusive, ethical, and participatory approach to data generation in humanitarian and development work.
Paper short abstract:
Focusing on the nexus of third-party consultancies, digital technologies, and state bureaucracy in India, this presentation looks at how land data is captured, and the material practices involved in rendering it ‘transparent’, ‘reliable’ and ‘accurate’.
Paper long abstract:
The Digital India Land Records Modernisation Programme (DILRMP) is a state-sponsored policy intervention designed to reform land governance in India. Managed by the Ministry of Rural Development’s Department of Land Resources (DoLR), DILRMP is promoted as an instrument for creating “error-free, transparent, and tamper-proof land records” using advanced digital technologies such as artificial intelligence, machine learning, and blockchain (DoLR, 2024). Central to this initiative is the ‘accurate’ capture of land data, including boundaries, acreage, ownership, and possession, with the aim of reducing land disputes and monetising land.
Alongside bureaucrats and government functionaries, key actors involved in mapping land and capturing land data are third-party non-state actors, specifically small private tech start-ups and e-governance consultancies. These actors not only partake in the production of land data but also, in many cases, define what constitutes "good", "transparent", and "accurate" data. Drawing on preliminary fieldwork, policy reports, newspaper articles, and grey literature, this presentation explores the material practices that undergird the process of data collection and its subsequent transformation into "reliable" data: how and in what ways third-party consultancies capture land data, how these consultancies liaise with state bureaucracies, how they streamline digital and analogue data, and finally, how they render DILRMP implementable. Ultimately, this paper dwells on what goes into the production of “transparent” and “reliable” data, and how multiple actors negotiate the process of data production.
Paper short abstract:
This paper examines how graduate programs in development studies prepare practitioners to engage with diverse data. By analyzing programs across different contexts, it explores training in qualitative methods, ethnography, and fieldwork to address gaps critical for inclusive policy-making.
Paper long abstract:
During the transition from the MDGs to the post-2015 agenda, there was growing recognition that measuring what we treasure is more important than treasuring what we measure. For instance, in addressing gender equality, the MDGs relied on a limited set of indicators, such as the ratio of girls in education, women’s employment, and women’s representation in parliament. In contrast, the SDGs introduced a more comprehensive approach, establishing a stand-alone goal for gender equality (SDG 5) and emphasizing the collection of sex-disaggregated data across multiple goals, reflecting the cross-cutting nature of gender issues. However, in addressing complex development challenges, data collection often privileges quantitative methods, supported by standardized tools, as the benchmark for measuring progress. While such data is critical for tracking trends, insights gathered through on-the-ground interactions are frequently dismissed as anecdotal, raising a critical question: how are development practitioners trained to engage with diverse forms of data? This paper examines how graduate programs in development studies prepare future practitioners to handle multiple forms of evidence. By comparing programs in Korea, a newer OECD DAC member, with those in traditional donor countries like the UK, it explores the availability of training in qualitative methods, ethnography, and fieldwork preparation. The analysis aims to assess gaps in key skills such as conducting interviews, managing community relationships, and triangulating evidence. The paper argues that addressing these gaps is crucial for equipping future practitioners to produce “good data” that reflects the complex realities of sustainable development and supports more effective, inclusive policy-making.
Paper short abstract:
This paper explores the limitations of quantitative data in evidence-based policymaking within development finance. I argue that integrating participatory evaluation with ethnographic methods can offer a more comprehensive view of the effects of development interventions.
Paper long abstract:
Evidence-based policymaking in development finance often leans heavily on quantitative data, which can be used to assess additionality (that the financial support provided leads to outcomes that would not have occurred otherwise) and contribution (how the financial support adds value or plays a role in achieving development outcomes). However, this data often overlooks unforeseen impacts—both positive and negative—on the communities surrounding an intervention. It's like peeling a banana: you can measure its weight and appearance, but you miss the subtle nuances of taste and texture that give it true meaning.
Through 18 interviews with evaluation directors at development finance institutions and small-scale farming entrepreneurs, I sought to uncover common views on the different types of impact information. What emerged is that participatory evaluation—where stakeholders are actively brought into the evaluation process, including in the design of metrics that apply to them—can effectively manage intervention risks. By pairing this approach with ethnographic study, a deeper understanding of people's lived experiences with development finance policies and interventions can be achieved.
The findings suggest that actively involving stakeholders in the design of metrics and evaluation processes can lead to more relevant, meaningful insights and a stronger sense of ownership over the evaluation results. It also improves data quality, thereby enhancing uptake and use in development policy. This approach is especially useful in community development, where local knowledge is key to assessing the true effectiveness of an intervention.
Paper short abstract:
Obtaining high-quality qualitative research data presents unique challenges in the field of international development. Addressing these requires capacity-building initiatives that equip research assistants with the skills and motivation to fully deliver the value of qualitative research.
Paper long abstract:
The importance of qualitative data for understanding complex topics and providing contextualised insights has led to increased funding and support for its use. However, in countries like Nigeria, the quality of qualitative data, namely transcripts, is often substandard. This compromises the ability to derive actionable insights and design effective solutions based on the research.
A key factor contributing to this challenge is the lack of skills, motivation, and agency among qualitative research assistants responsible for conducting interviews and group discussions. Many of these researchers lack formal training in qualitative research methodologies and are often recruited from quantitative research backgrounds without a clear understanding of the fundamental differences between the two approaches. Even when researchers understand these differences, such as the importance of effective question framing and probing, they often feel compelled to adhere strictly to the guide questions, as they believe this is the expectation of the client. This limits their ability to adapt dynamically to the flow of discussions and explore deeper insights during qualitative data collection.
To address this issue, I am offering targeted training programs that prioritise practical skill-building and critical thinking. Rather than relying on primarily lecture-based instruction, these programs emphasise hands-on practice with real-time feedback. The training also encourages researchers to engage critically with the research process, empowering them to design their own research questions and guides, carry out fieldwork they have designed, and analyse the data. This approach enhances their ability to conduct meaningful fieldwork and generate more reliable, insightful qualitative data.