- Convenors:
  - Evelyn Ruppert (Goldsmiths, University of London)
  - Daniel Neyland (Bristol Digital Futures Institute)
  - Jennifer Gabrys (University of Cambridge)
- Stream: Tracks
- Location: 125
- Sessions: Thursday 1 September and Friday 2 September 2016
Time zone: Europe/Madrid
Short Abstract:
While techniques of measurement and monitoring and the evidence they generate are often the monopoly of dominant institutions, this track will address the ways in which they are being challenged and reshaped by the distributed practices of numerous new actors and collectives.
Long Abstract:
Across multiple domains of interest to science and technology studies, including the formation of markets, the study of environments and the making of governable populations, measurement and monitoring techniques have become key for assessing "problems" and demonstrating the effectiveness of "solutions." While techniques of measurement and monitoring and the evidence they generate are often the monopoly of dominant institutions, they are being challenged and remade by the practices of numerous new actors and collectives. This track will address the ways in which new collective and distributed practices are reshaping techniques of measurement and monitoring and the formation of evidence. The track invites presentations that engage with questions including: How do different measurement techniques and instruments reshape what counts as evidence? How do new actors engaged in monitoring activities support, reshape or disrupt expert-led monitoring practices? And in an age of "big data," how does the extensive collection of data by multiple different actors transform the processes by which data becomes evidence? This track invites papers, as well as presentations in alternative formats that engage with practice-based research and alternative media. Material from this track will be selected and collected into a special theme for the new online open-access journal, Demonstrations (http://ojs.gold.ac.uk/index.php/demonstrations/index), which encourages experimental formats for engaging with science and technology studies.
SESSIONS: 4/4/4/5/4
Accepted papers:
Session 1: Thursday 1 September 2016

Paper short abstract:
Hospital rankings have evoked calculable spaces of measuring performance. Valuation encapsulates both quantitative representation and qualitative evaluation of the numbers presented, revealing distributed ‘goods’ of care.
Paper long abstract:
Rankings have evoked new forms of organizational competition in the healthcare arena. They are believed to render organizational performance - and hence 'healthcare quality' - both accessible and comparable, contributing to a more transparent healthcare system. In the Netherlands, where this study is located, two annual rankings are in use. One is published in a daily newspaper (Algemeen Dagblad), the other in a weekly magazine (de Elsevier). Both are based on (slightly different) sets of performance indicators, measuring and listing quality of care - thus 'putting performance into numbers' (Asdal 2011). In this paper, we examine the metrics and quantifying infrastructures of hospital performance measurement. Drawing on the ontological turn in Science & Technology Studies (Mol 2002, Woolgar and Lezaun 2013), we seek to open up the "calculating selves and calculative spaces" (Miller 1994) of hospital rankings, and study how these infrastructures evoke distributed practices of 'good care'.
Empirically we build on ethnographic work in three hospitals in the Netherlands (Bal, Quartz et al. 2013). We study how these hospitals enact hospital rankings and examine how they quantify their practices of care and make sense of them, both with regard to their own performance and in competition with others. More particularly, we study the 'calculable spaces' of measuring performance in which calculating instruments (e.g. pain scoring instruments, complication registrations) are enacted and performance is valued and accounted for. Valuation, we will argue, encapsulates both quantitative representation and qualitative evaluation of the numbers presented. These different practices reveal distributed 'goods' of care.
Paper short abstract:
Drawing on a study of the WHO's Evidence-informed Policy Network (EVIPNet) activities in Uganda, this paper examines contestations over who decides, and how and where, what counts as legitimate knowledge to inform healthcare policy.
Paper long abstract:
'Evidence-based' or 'evidence-informed' policy has become something of a shibboleth, separating the good from the bad in healthcare decision-making and held to be indispensable to improving health, especially in what are still commonly referred to as "developing" countries.
The World Health Organization (WHO), for example, urges countries to strengthen 'knowledge translation' capacities and to align national healthcare policies with 'global' scientific evidence. As a key vehicle, WHO's Evidence-Informed Policy Network (EVIPNet) is conceived as a 'global, regional and country level social network' (EVIPNet 2015) and 'living laboratory' (Hamid et al. 2005) that promotes the systematic use of research evidence in policymaking.
This paper draws on an ethnographic study of WHO's Evidence-informed Policy Network (EVIPNet) activities in East Africa. Focusing on evidence-making practices at EVIPNet Uganda, I explore the tensions that arise when narrow definitions of and demands for scientifically validated evidence-for-policy - usually in the form of meta-reviews of experimental research findings - rub against practitioners' desire for 'local' and context-specific evidence. I also explore how associated attempts to include other forms of knowledge might expand who decides, and how and where, what counts as legitimate evidence.
Paper short abstract:
While local health departments often seem to comply with national health promotion guidelines, this case study of the Danish health promotion guidelines illuminates how local policy workers covertly reshape the guideline monitoring tools to perform 'guideline compliance'.
Paper long abstract:
Guidelines, and the related practices to monitor compliance with them, are increasingly used as means to regulate local authorities towards certain practices. This exploratory case study takes the Danish national health promotion guidelines as a case and shows how local policy workers do not merely implement guidelines as proposed. The case study was conducted over a year and a half of ongoing contact with local policy workers in a Danish municipality's health department. Data were gathered using a variety of (in)formal qualitative methods and uncover both official and unofficial beliefs and practices.
Great advisory and financial efforts have been expended to ensure the success (understood as a high degree of implementation) of these guidelines. According to the issuing authority, the Danish Health and Medicine Authority (DHMA), the guidelines are a success story worth extended investment and continuation. Yet this case study tells another story of the local implementation of the health promotion packages: a story of deliberate concealment strategies, creative practices, and different performances. Applying a dramaturgical framework, it illustrates how the local policy workers challenged and reshaped the expert-led practices used to monitor their guideline compliance so as to front-stage some implementation practices and back-stage others. In doing so, 'guideline compliance' became a deliberate performance by the local policy workers rather than the result of acting in accordance with the guidelines. Going forward, when studying local guideline compliance, we must also consider it as a staged performance in which deliberate techniques are used to produce and manage certain impressions of compliance.
Paper short abstract:
This research explores the process of making a 'certified risk' for the brain, particularly in the area of stroke risk assessment, by examining the ways in which a national standards reference data system, or brain-mapping, is developed in Korea.
Paper long abstract:
The theme of health risk intersects the boundary between STS and public health policy. How is a particular health risk detected, measured, diagnosed, communicated, and controlled? What kinds of opportunities and challenges do new and emerging technologies present to those who deal with health risk issues? How are these issues manifested in local or global contexts? This paper explores the process of generating a standard model for brain health, particularly for stroke risk assessment, in the context of building a national standards reference data system in Korea. It examines the ways in which large volumes of individuals' health information are encoded as 'data' to create a standardized model for determining the health of the brain. In particular, this research traces the development of statistical maps for diagnosing a brain disease, White Matter Hyperintensities (WMHs), to illustrate how the concept of 'certified risk' is generated and put to use in Korean society. This case study of statistical brain anatomical maps shows, we argue, an interesting assemblage of health risk practices encompassing measurement science, computer technologies, probability statistics, and standard-making procedures. This paper thus aims to shed light on contemporary risk-governing practices by analyzing the interplay among various actors, such as researchers, policy makers, and clinical practitioners, who are trying to gain a certain level of objectivity, accuracy, and reliability for governing public health risks within the framework of making standards.
Paper short abstract:
Drawing on ethnographic fieldwork in statistical offices in EUrope, this paper studies the multiple practices, decisions and negotiations that are involved in the delineation of the population that is to be enumerated into being as an intelligible object of government in register-based censuses.
Paper long abstract:
Following Michel Foucault, the target of biopolitics is not territories or individual subjects but entire populations. While Foucault specifies that the emergence of this mode of government is related to new forms of knowledge - most notably statistics - he does not elaborate on how populations are constituted as intelligible objects of government. To address this question, this paper studies what I call biopolitical bordering: quantification practices which determine who is to be considered part of the population that is to be enumerated into being through censuses and other population statistics. Drawing on ethnographic fieldwork in Estonia's statistical office, I study the practices, data infrastructures and method assemblages that are involved in the delineation of populations in order to inquire how this biopolitical bordering is done. In the context of Estonia's register-based census, determining the so-called target population (the population to be enumerated in the census) requires a methodology that makes it possible to decide whether a person has been resident in Estonia in the 12 months preceding the census. Therefore, statisticians are developing an algorithm that determines an individual's residency status through the number and types of entries a person has in 20 different government registers. What my research suggests is that multiple negotiations and decisions are involved in this kind of biopolitical bordering, ranging from the challenge of accounting for the varying significance of government registers to the question of whether to include in, or exclude from, the target population people with ambivalent 'signs of life' in the registers.
Paper short abstract:
This paper traces the contested socio-material emergence of the EU’s Smart Borders Package for biometric border control and shows how its political negotiation, technical development and practical implementation – far from depoliticising border struggles – has opened up new spaces of contestation.
Paper long abstract:
In early 2013, the European Commission proposed the creation of a Europe-wide database which would, on the one hand, biometrically register all non-EU citizens' entries to and exits from the Schengen territory and, on the other, accelerate border crossings for pre-vetted travellers. Commonly referred to as the Smart Borders Package, this proposal indicates a remarkable shift in the way the EU's external borders are problematized. It partly recasts the border itself as an obstacle, but promises to reconcile security and speed by technologically upgrading it. However, over the last three years, both the proposal's political negotiations and its technical development have yielded profound controversies. In particular, technical devices and their very properties have become objects of political disputes, thus opening up new spaces of contestation and possibilities for politics and resistance.
Based on multi-sited ethnographic fieldwork and drawing on questions raised by Foucauldian studies of governmentality and by actor-network theory, this paper traces the contested process of emergence and socio-material assembling of the European smart border. It gives detailed insight into the ongoing political negotiations, technical development and practical testing, and shows how this process has fostered techno-political controversies which nearly led to its failure. In doing so, it examines the socio-technical work of assembling and aligning heterogeneous networks of border control, the various actors involved, and the difficulties of making them function. Beyond that, it exemplifies why this development is constantly threatened by failure, but also which effects smart borders would entail once implemented.
Paper short abstract:
While practices of listing, forensics, and identification established how many people were killed in the 1995 Srebrenica genocide, they also produced very different outcomes. By articulating, juxtaposing and contrasting these practices, the paper analyzes technolegal realities after Srebrenica.
Paper long abstract:
In 1995, many Bosniaks were massacred after the 'safe haven' of Srebrenica was overrun by the Army of Republika Srpska (VRS). After killing and burying approximately 8,000 men, the VRS unearthed the victims' bodies and relocated them to secondary mass graves, rendering them an "absent presence" (M'charek et al 2014). Proof of the biggest post-WWII genocide in Europe was initially lacking, which created room for denying the atrocities. In the years after the genocide, various mechanisms have been used to generate insight into the number of victims and evidence of the massacre. These mechanisms include locating mass graves and recovering the bodies therein, counting bodies and body parts, compiling and comparing missing persons lists, statistical models to estimate the number of missing persons, forensic genetics to individualize and identify remains, and testimonies of survivors and other witnesses.
The paper addresses three such mechanisms: missing persons lists, forensic anthropology to determine the minimum number of individuals represented by human remains, and identification. Since such numbers are not only technoscientific articulations, but also materialize as legal evidence at the International Criminal Tribunal for the former Yugoslavia, these mechanisms and their outcomes should be considered "technolegal" practices and objects (Toom 2015). Through describing, juxtaposing and contrasting the three technolegal practices of listing, forensics, and genetic identification, I aim to articulate the various post-Srebrenica genocide realities and their politics, including those of genocide deniers.
Paper short abstract:
Drawing on ethnographic fieldwork across multiple national statistical institutes, this paper argues that rather than radically rupturing previous ways of data collection, Big Data re-raises older concerns around the question of who determines how social data is collected, analysed and disseminated.
Paper long abstract:
The emergence of enormous, interconnected, dynamic datasets, or "big data" as they have become commonly known, has started to pose a serious challenge to official statistics. Many now fear that statistical authorities will increasingly have to negotiate with the private sector for access to Big Data. Drawing on ethnographic fieldwork conducted across multiple national statistical institutes in Europe, I suggest that the challenges associated with Big Data are not entirely unlike those previously encountered with more traditional forms of statistics.
For example, since the 1960s, the Nordic welfare states have been collecting very detailed information on their citizens in various electronic registers held by different government departments and institutions. The social security number, a unique identifier for each citizen, has allowed the information scattered across different registers to be conjoined into an annual census of the population. Working with registers has necessitated that statistical authorities negotiate with outside actors for access to data. As NSIs move to work with Big Data, similar negotiations remain essential.
By drawing on several empirical examples, I suggest that rather than radically rupturing previous methods of data collection, Big Data in fact re-raises much older concerns around the question of who has the power to determine how social data is collected, analysed and disseminated.
Paper short abstract:
I use four definitions of the thing—as object, as assembly, as superhero, as assimilating parasite—to investigate the entanglements of algorithms and data. Based on ethnographic research at national statistical institutes, I argue that the "algorithm/data thing" reshapes what counts as evidence.
Paper long abstract:
Algorithms and data are deeply intertwined. Data are gathered and stored using algorithms, and algorithms operate on data to produce results. To move away from the technical definitions of these two concepts, and to investigate them as objects of sociological interest, I propose understanding them together using the term "algorithm/data thing". I argue that we can use four different definitions of the thing—as object, as assembly, as superhero, and as assimilating parasite—to investigate the entanglements of algorithms and data. With these definitions, I extend Geoffrey Bowker's reading of Binder et al.'s proposal to consider alternative meanings of "thing", as in the Icelandic parliamentary institution "Althing", with two additional figures from popular culture.
I apply the algorithm/data thing to preliminary findings from a collaborative ethnographic research project spanning several European National Statistical Institutes, and I demonstrate how the algorithm/data thing reshapes what counts as evidence in new official statistics methods that use big data analytics and other experimental data sources. As a site of technical practice, as a mode of governance, as a key figure in contemporary discourse, and as a mechanism of quantification, the algorithm/data thing enacts specific types of evidence at the expense of others, and it has implications for how knowledge production is organised within national statistical institutes.
Paper short abstract:
Pupils' well-being is an emerging object of governance in Denmark. We explore three different measurements and techniques of calculation. All contribute to factualising 'well-being', but they also enact different versions of well-being, publics, and the problem-solution nexus.
Paper long abstract:
The concept of well-being has become a key category of social and political imagination, cultivating new understandings of 'what it means to be a capable person' (Corsín Jiménez, 2008, 2). In 2015, the Danish Ministry of Education began conducting national, annual measurements of Danish pupils' well-being. This measurement recasts the traditionally qualitative psycho-pedagogical concept of well-being in numerical terms. Moreover, different actors with overlapping but competing calculative techniques enter the scene. We investigate well-being as an emerging object of governance in Denmark with attention to competing techniques for measuring and calculating well-being: 1) the statistical factor and reliability calculations used by the Danish Ministry of Education to turn a 40-question questionnaire into a 'quality indicator', which in turn is used to hold institutions accountable to new national objectives for pupils' well-being; 2) the Danish newspaper A4's interactive, online mapping of pupils' well-being at all Danish schools, developed from the same numbers (accessed through their juridical right of access to government files) but using different calculative techniques and aimed at communicating to a general public; 3) the 'Københavnerbarometer', an older well-being metric developed by the municipality of Copenhagen for the local governance of schools. We highlight how these different metrics contribute to factualising 'well-being' as evidence of the state of affairs in the public school, even as they also enact different versions of well-being, publics, and the problem-solution nexus.
Paper short abstract:
Based on the experience of the GenderTime EU-funded project, the paper will reflect on the multiple tools tested to monitor and measure gender equality. It will question their relevance and the paradoxical outcome they produce: being at once drowned in data and lost in terra incognita.
Paper long abstract:
From the experience gained in the GenderTime project (EU FP7 2013-2016), the paper will present the methodologies tested to monitor, to measure and to assess the outcomes of several gender equality plans (GEPs) developed in several European research institutions.
After an overview of the different strategies tried, inspired by various fields from quality assurance to statistics and qualitative sociological research, the challenges raised by the implementation of GEP tools will be discussed, with a focus on data and on their relevance for the implementing institutions. Despite many positive and self-satisfied assessments, blind and well-intentioned implementation is not enough.
Regarding the data, we are at the same time drowned in data and lost in terra incognita. While there is plenty of reliable data available on human resource management, data on research activity itself is either poor, unavailable, or time-consuming to analyse due to the lack of common classifications and databases. Surprisingly, connections between gender equality and scientometrics are almost non-existent.
In terms of relevance, the GEPs have been implemented with a focus on STEM, because of the underrepresentation of women in those fields. The issues of equality in other disciplines are neglected or unexplored. Moreover, GEPs suppose institutions and researchers are highly motivated by ranking and competition. But this model does not correspond to most institutions and individuals.
As a conclusion, we propose some perspectives for better and more relevant data and alternative models of excellence, leading to alternative measurement of gender equality.
Paper short abstract:
This paper explores contemporary US and global struggles to quantify ‘teacher quality’ and the trials being faced by these efforts in a range of arenas, including the media and the courts. It contributes to understandings of how numbers are being made and unmade in contemporary education policy.
Paper long abstract:
Associated with improved student outcomes and trillions of dollars in GDP, 'teacher quality' has become a 'high stakes' issue, and efforts to measure and improve it are rife in contemporary policy. Focusing on 'value added measures' (VAM) in the US and using data from the media, think tank reports, academic papers and policy documents, this paper traces efforts to quantify teacher quality and the trials these efforts are facing. Conceptualising these measures as 'interesting objects' (Asdal, 2011; Gorur, 2015), it traces how they draw in a range of unexpected actors, and redistribute effort and expertise among psychometricians, testing agencies, teachers' unions, parliamentary debates and the courts, to name a few. Interestingly, even as 'value added measures' are gaining policy influence, the American Statistical Association itself has issued a cautionary note about their validity.
This paper draws on concepts from sociology of measurement (Woolgar, 1991; Gorur, 2014) and engages with current STS work on the making and unmaking of numbers (e.g., forthcoming special issue of STS, guest edited by Verran and Lippert), and draws upon, and extends, Asdal's (2011) work on 'interesting objects'. It also extends my on-going efforts to bring attention to the performative nature of measurements (Knorr-Cetina, 1999) and to deflect attention from simply debunking numbers by focusing on their productive capacities.
The paper directly relates to the Track's interest in the work of different measurement techniques in the production of 'evidence', and in the engagement of different actors and the re/distributions of 'expertise' among them through such measurements.
Paper short abstract:
This paper examines the work completed by a citizen organization so that "undone science" about pollution in the Fos-sur-Mer industrial area (France) gets done. We discuss the epistemological and political qualities of participative biomonitoring data, as well as their ability to challenge current regulatory practices.
Paper long abstract:
The Fos-sur-Mer industrial area, one of the largest of its kind in Europe, was set up in the 1960s. Steel and petrochemical plants, some classified as dangerous, were built. The impact on the local communities was dramatic. Pollution rapidly generated protest, and in 1971 the administration created a collegial organization to restore dialogue. Technical solutions to reduce emissions were examined. Concerns fluctuated until a waste incinerator was constructed in the 2000s. The issue was raised again. Demonstrations took place to oppose the facility. Residents pointed to the lack of knowledge about the industry's impact on environment and health. Under pressure, some representatives requested a territorial check-up. They also supported a citizen-based organization (IECP, or Eco-citizen Institute for Knowing Pollution) whose aim is to develop research on the chronic effects of pollution below the threshold of regulatory norms, but also to lobby the administration so it may change its monitoring.
The objective of this paper is to examine the work completed by the IECP so that "undone science" (Frickel et al. 2010) about pollution and its impacts on living organisms gets done (and is made locally relevant). Special attention is paid to participative biomonitoring experiments that document pollution accumulation. Drawing on archival research and interviews with stakeholders, we trace how relations between scientists, decision makers, industrialists and citizens evolve in the area. We also discuss the epistemological and political qualities of the data produced as well as their ability to challenge current regulatory practices.
Paper short abstract:
Citizen sensing involves monitoring and measuring environments to generate new forms of evidence. Yet how do the instruments of citizen sensing give rise to, or complicate, instrumentalist approaches to environmental citizenship?
Paper long abstract:
Citizen sensing practices are emerging that use technologies in order to monitor and measure environmental problems such as air pollution, and to generate data that could be actionable for policy and regulation. Yet does the rise of citizen sensing practices and technologies re-inscribe instrumental—or in other words potentially reductive and functional—approaches to citizenship and political engagement? Or, do these new types of instruments in the form of low-cost environmental sensors rework what could be seen as instrumentalist approaches to politics to develop new vocabularies of effect and effectiveness, and to challenge the apparently linear logic of instruments through the more entangled operations of attempting to realise environmental change?
Through a discussion of practice-based and participatory research into air pollution sensing with affected communities, this presentation will address how or whether the instruments of environmental monitoring lead to or disrupt instrumental engagements with citizenship. We outline the ways in which citizen sensing technologies are often presented in their more prototypical and beta stages of use and development. We will then compare these conceptualisations of monitoring technologies with more sustained engagements and testing that attempt to generate environmental data to effect particular types of environmental change. In the process of making new forms of evidence, through emergent technologies and environmental collectives, we suggest that the instruments of citizen sensing demonstrate how apparently instrumentalist versions of evidence-based politics can give rise to diverse and inventive citizen-based and collective practices through the very attempt to gain influence through the collection of data.
Paper short abstract:
Following the Fukushima accident in 2011, a citizens' movement to measure radiation gained prominence in Japan. I explore how local groups, as new actors, engaged in measuring radiation levels in air, food, water and soil, and how they affected the monitoring activities of the government.
Paper long abstract:
Following the Fukushima nuclear power plant accident in March 2011, a citizens' movement against radioactive contamination gained prominence over a wide area of Japan. Beyond criticizing and protesting against the provisional regulatory limits adopted by the government, citizens also began measuring radiation levels themselves using their own devices and equipment. Many individuals and local groups measured air radiation levels and the radioactivity of food, water and soil in their residential areas. They shared their data via the internet and used them in negotiations with local governments. This measurement movement also emerged in places which the government and experts considered "non-affected" areas, including Tokyo.
In this paper, mainly based on interviews with, and documents produced by, citizens' groups in the Tokyo metropolitan area, I describe how local groups engaged in measuring radiation levels in their local environments. Finally, I explore how these local citizen groups, as new actors, affected the expert-led radiation monitoring practices of the time.
Paper short abstract:
This paper explores what it means to measure disasters in a way that moves concerns away from the complexity of disasters over time and in specific locations, towards shareable information and communication, providing insight into how acts of knowing, measuring, and sharing are interwoven.
Paper long abstract:
Disasters are often described by their numbers, but numbers alone do not render a disaster comparable, shareable, or explainable. A 7.9-magnitude earthquake in one place can cause extreme destruction that challenges any return to normality, while in another place an earthquake of the same magnitude may cause only mild disruptions to daily routines. Knowing that half a million people evacuated offers a sense of spatial scale, but says little about the devastation, which can vary from almost everyone returning home to almost everyone's home being ruined. Yet disasters are increasingly being made knowable via technologies that measure effects (such as satellite visualisations or text message tallies) and information systems that share data among diverse networks. Doing so requires many of these qualities to be predefined and reduced to numbers and standards. In this paper, I explore what it means to measure disasters in a way that foregrounds the ability to engage with distributed infrastructures, moving concerns away from the complexity of disasters as social, technological, and natural interactions over time and in specific locations, towards information and communication. Drawing on collaborative and ethnographic research, I take up the design of new disaster information technologies meant to produce and share data about disasters across a variety of actors from a range of disciplinary, experiential, national and cultural backgrounds. I aim to provide insight into how acts of knowing, measuring, and sharing are interwoven.
Paper short abstract:
The Worldwide Hum Map and Database operates as an agent of participatory environmental monitoring and sensing, as well as an instrument of risk assessment and communication. It aims to demonstrate not merely the Hum’s incidence and extent but also its scientific intelligibility and very reality.
Paper long abstract:
Audible only to an unhappy few, it has been heard in England, Scotland, Canada, the U.S., Germany, Denmark, Australia, New Zealand, and Japan. Often likened to the sound of a diesel engine idling in the distance, it has been described as a persistent, low-pitched droning or buzzing, a pulsing, a throbbing, a rumbling. For some hearers, it is more than a mild annoyance; it is a cause, allegedly, of pain, disease, and severe distress. Despite the efforts of scientists, engineers, and acousticians, the "Hum"—as this anomalous phenomenon is colloquially known—remains both an ontological and an epistemological enigma: a sound of uncertain origin, a noise of unknown identity, a sensation or perception of indeterminate existence.
My paper examines the Worldwide Hum Map and Database (WHMD), a website and blog that assembles crowdsourced data on the Hum's geographic locations, dates and times of occurrence, auditory qualities and intensities, and bodily effects. It also considers the design and construction, by self-motivated individuals, of mechanical and electronic "Hum detectors." As against the evidentially inconclusive investigations conducted by experts under the aegis of dominant institutions, I show how the WHMD and homemade Hum detectors operate as effective agents of participatory environmental monitoring and sensing, as well as popular instruments of risk assessment and communication. Unlike other contemporary "crisis-mapping" projects, however, these civic sociotechnical devices and practices bear an additional, and more fundamental, burden: they must demonstrate not merely the Hum's incidence and extent but also, I argue, its scientific intelligibility and very reality.
Paper short abstract:
National statistical institutes experiment with new data sources to cut costs and produce more timely statistics. I contend that, based on imaginaries of the creative and tech sectors, they reshape what counts as evidence in statistical practice and produce new professional identities.
Paper long abstract:
National statistical institutes (NSIs) are tasked with the production of rigorous and reliable data about the state of a population. Yet they are also pressured to cut costs and produce more timely statistics. Some NSIs have started to experiment with new data sources to answer these challenges. In this paper I examine experimental forms used by NSIs to test data sources such as Twitter and traffic sensors. I explore and theorise a fieldwork finding: in an environment of rigorousness, new experimental forms seem to encourage acceptance of 'imperfections' and results that are 'just good enough'. This may very well be typical of innovation and work-in-progress, but it is controversial in official statistics.
In STS the quality of evidence is generally considered to be a social achievement. Yet 'just good enough' has received little attention as an explicit valuation. So how is 'just good enough' accomplished? And what exactly is it? I draw on ethnographic observations of experimental practices at Statistics Netherlands and other European NSIs, including a bootcamp and an innovation lab to which actors such as internet companies are invited to cooperate. Based on imaginaries of the creative and tech sectors, these formats promote 'creativity', 'agility' and 'quick results'. I contend that they reshape what counts as evidence in statistical practice and produce new professional identities (cf. Shapin and Schaffer 1989; Haraway 1997). They do not introduce radical change, however, as they incorporate existing standards and are intertwined with ongoing cost-cutting measures and disciplining managerial techniques.
Paper short abstract:
This paper addresses the call to explore collective practices to reshape measurement, monitoring and evidence by advancing the concept of data infrastructure literacy and illustrating it with examples of interventions to recompose data infrastructures from journalism, activism and social research.
Paper long abstract:
A recent report from the UN makes the case for "global data literacy" in order to realise the opportunities afforded by the "data revolution." In this context data literacy is characterised as a combination of information literacy, statistical literacy and technical skills, and reflects conceptions proposed by both practitioners and researchers working around this topic. We argue that such conceptions risk obscuring the methodological and analytical inscriptions or bias that data come with, and the particular forms of valuation which they might favour (e.g. auditorial and entrepreneurial). In response to these dominant conceptions of data literacy we advance an alternative conception of data infrastructure literacy. The conception of data infrastructure literacy that we propose draws attention to the need to account for the wider data infrastructures which create the socio-technical conditions for the creation, extraction and analysis of data. In shifting focus from datasets as raw materials to data as infrastructures we call for rethinking the action repertoires of data publics and their potential to challenge, reshape and reconfigure the composition of data infrastructures and of the techniques of measurement and monitoring inscribed in them. In order to advance this agenda, we propose a provisional framework for thinking about data infrastructure literacies and discuss a number of new collective practices and examples from activism, journalism and social research that have sought to challenge the constitution of existing data infrastructures and to reshape the techniques of measurement and monitoring central to the formation of evidence.
Paper short abstract:
This paper documents new collective practices of measurement and regimes of justification employed by patients suffering from rare conditions, treating them as prefigurations of new evidential regimes and constructs of “small data” that are likely to gain salience with the advance of personalized medicine.
Paper long abstract:
This paper concerns a collective practice of measurement that might be thought of as the obverse of big data: how patients with super rare diseases (conditions that are suggestively referred to as "orphans" and "ultra-orphans") mobilize to challenge and "talk back to" prevailing evidential regimes in biomedicine. Because these regimes regard double-blind RCTs as the evidential "gold standard", conditions which have a low incidence in the population cannot readily yield up evidential gold, but only the dross of small numbers. Absent the power of "robust evidence", orphan conditions are understudied, and what therapies have been developed tend to be exorbitantly priced (because of negligible market share), and underfunded by public bodies charged with distributing public resources equitably and in accordance with principles of "value for money". Based on ethnographic research among patient activist organizations and health technology assessment agencies, this paper will explore the networks and strategies through which patient-activists seek legitimacy and influence. Of particular interest are three questions: 1) What kinds of lateral and lay-expert alliances develop between patients with rare conditions and the pharmaceutical industry in attempting to redefine evidence, value, and fair price? 2) What repertoires of evaluation, regimes of justification, and principles of exceptionalism are mobilized to argue for access to orphan medications? 3) How might these frameworks, alliances, and data gathering practices from below prefigure shifts in evidential regimes that are likely to gain salience with the promise of personalized medicine and the genomically-aided stratification of more common diseases into ever more specific subtypes?
Paper short abstract:
Open Source Intelligence (OSINT) refers to the gathering of data from sources that are publicly available. This presentation focuses on activist examples of OSINT. It aims to understand OSINT empirically and conceptually and how these new methods are learned and shared.
Paper long abstract:
Open Source Intelligence (OSINT) refers to the gathering of data from 'open sources', sources that are publicly available. This could be open data, unintentionally leaked data, or wittingly leaked data. The term 'intelligence' is usually associated with the activities of intelligence agencies, but this paper focuses on examples in which OSINT is used for activist purposes. OSINT practices include, for example, the analysis of social media, video and maps in the context of human rights issues and the analysis of leaked datasets in the context of counter-surveillance activism. The paper aims to understand OSINT empirically and conceptually, by tracing the history of the term and distinguishing it from neighbouring terms such as forensics or other forms of evidence production. In the paper, OSINT is taken to be a form of 'data activism'. Data activism is an umbrella term that indicates grassroots mobilizations enabled but also constrained by software, which take a critical stance towards massive data collection (http://data-activism.net). OSINT, being a proactive response, mobilises new methods and techniques of data analysis. This paper brings into view how this happens (by looking at, amongst others, 'Bellingcat', a project website by and for investigative journalists) and how these new methods are learned and shared.