Accepted Paper
Short Abstract
We reframe citizen science through five crowd paradigms and discuss how AI can augment citizens' effort, automate their tasks, or manage their contributions in each. We illustrate with examples from a range of scientific fields. We also discuss how and when AI may challenge the underlying logic of involving large crowds in the first place.
Abstract
Citizen science is often treated as a single model—volunteers contribute data to professional science—yet projects differ markedly in how tasks are structured, how coordination occurs, and what kinds of knowledge are produced. We propose a comparative framework that locates citizen science within five “crowd paradigms” (Beck et al. 2022). For each paradigm, we analyze how contemporary AI systems interact with core mechanisms of volunteer contribution. Rather than offering design prescriptions, we develop an explanatory account of when and why AI may augment human effort (e.g., pre-screening images, flagging anomalies), automate citizens' tasks, or manage projects (e.g., routing tasks, estimating reliability, shaping incentives). Going beyond the automation of individual citizen tasks, we also discuss how AI may challenge the overall logic of involving crowds in the first place.
The framework clarifies cross-cutting issues—independence and diversity of judgments, error structures and bias propagation, validity and provenance, and the social meanings of participation—that are often discussed piecemeal. It helps interpret observed successes and failures across domains such as ecology, astronomy, health, and mapping by linking outcomes to underlying mechanisms. We highlight boundary conditions where AI is most complementary to volunteers (e.g., when human local knowledge or tacit skills matter) and where substitution risks are highest (e.g., routine perception tasks at scale). We also surface implications for inclusion, motivation, and ethics.
Our contribution is a parsimonious map that integrates heterogeneous citizen-science practices and situates AI within them. This map can organize empirical findings, suggest comparable measures across projects, and guide cumulative research on the joint roles of human crowds and AI in knowledge production—without presuming any single “best” design.
From practice to pattern: Using organization and management research to advance citizen science