- Convenors:
- Joseph Cook (University College London)
- Hannah Knox (University of Manchester)
- Stream:
- Evidence
- Sessions:
- Monday 29 March, 2021
Time zone: Europe/London
Short Abstract:
This panel explores the relationship between expertise, experience and numerical data in our data-saturated world. Emerging from the question of what social practices lie behind the creation and presentation of numerical data, we invite papers exploring data practices through ethnographic study.
Long Abstract:
The World Bank and International Labour Organisation predict that within the next few years, and for the first time, over half the world's workers will be in what is broadly termed the 'service sector'. An increasing number of these will be 'knowledge workers', whose jobs involve the creation, interpretation and distribution of information. In this panel, we want to explore the social practices behind the creation and use of numerical data as 'information', whether on our climate, on our economy, within the corporation, or within the academy. Who are the people behind this data collection? How are digital technologies and novel metrics shaping what counts as valid knowledge, and what role do expert knowledge and lived experience have in challenging numerical data that can often seem so certain and so convincing? We wish to explore what ethnographies of a data-saturated world can tell us about how and why information is being transferred, debated, ignored, or blindly followed, and what role the anthropologist might play in helping navigate these paths.
Accepted papers:
Session 1: Monday 29 March, 2021
Paper short abstract:
This paper examines the expansion of big data, predictive modelling and data-driven technologies into social service delivery. Drawing on the framing rhetoric around these data technologies, I show how dubious numerical expertise is justified in making decisions about marginalised populations.
Paper long abstract:
Welfare bureaucracies are undergoing a transformation, with various administrative procedures that were once fully based on human inputs transitioning into data-driven, automated systems. This shift is underpinned by discursive work that claims and repurposes new streams of digital data as the basis of policy making and as valid evidence from which actionable insights can be gleaned. Justifications for the shift draw on claims of an epistemic break in which large-scale data linking and aggregation is seen as offering a new gold standard of knowledge and ‘a higher form of intelligence... that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy’ (Boyd and Crawford 2012: 663). I argue that the conceptual apparatus of ‘dataism’ from Critical Data Studies is useful for exploring how such belief in the ‘objective quantification and potential tracking of all kinds of human behaviour and sociality through online media technologies’ is mobilised in social service delivery (van Dijck 2014: 206). Using case studies of the introduction of data-driven decision making in sensitive social policy fields such as child protection, and of the pre-emptive, algorithmic targeting of social services, I empirically demonstrate the discursive slippages, conflations and rhetorical choices deployed to legitimise data science expertise within government. Novel digital proxies for ‘offline’ social activities are created using patterns from digital trace data as a stand-in for human behaviour. A fundamental problem emerges, as this assumes that life processes that cannot be expressed as digital data and translated into a machine-readable template do not count.
Paper short abstract:
This paper examines the time-allocation tools employed by Finnish universities and the commensurating logic underlying them. What is accomplished by quantifying academic labour time, and why?
Paper long abstract:
Finnish universities and government research institutes have used the so-called full costing model since 2009. As part of this model, they employ time-allocation tools such as “SAP HR” and “Sole TM”, which record academic working time at very precise levels, down to one hundredth of an hour. Users have found these tools not only clumsy but also unethical: discussion ensuing from a 2010 petition against the use of Sole TM at Finnish universities noted both that the tools place a further strain on academics’ time and that it is unethical for academics to be involved in the production of false data. This paper explores precisely the “fictitiousness” required when academic time is converted into “effective” or “completed” working hours. After a decade of time allocation, Finnish academics appear to have grown used to the absurdities of reporting allocated time that grossly underrepresents the actual hours spent on academic work. But the reasons behind the practice, as well as its potential repercussions, remain largely undiscussed.
Paper short abstract:
This paper examines the ways in which COVID-19 redefined the relationships that experts and members of the public have with medical data. Focusing on Irish examples, it will explore how the context of the pandemic turned numbers from abstract cognitive tools into affective tenets of social lives.
Paper long abstract:
As the coronavirus started to spread in Ireland, epidemiological data became the most sought-after information in the country. This paper will examine the ways in which COVID-19 redefined the intimacies of the relationships that health professionals and members of the public have with medical data. It will focus on Irish examples and explore how the context of the pandemic turned numbers from abstract cognitive tools into important and affective tenets of social lives that dictated the moral values and conditions of sociality. It will examine the role of enumeration and metrics in mediating new forms of intimacy with state and society.
Paper short abstract:
How do programmers use software applications to create, process and handle the data circulating through the web today? This paper will explore the question above by looking at how a specific community of programmers conceptualizes and practices data handling through software.
Paper long abstract:
How do programmers use software applications to create, process and handle the data circulating through the web today? This paper will address that question by looking into the practices of a specific community of programmers, namely Ruby programmers.
Ruby is a programming language created in Japan in 1993 by Yukihiro Matsumoto. Although created as a scripting language, Ruby became popular as a web development language after the creation of the Ruby on Rails framework. Major web companies such as Airbnb, SoundCloud, Twitter and, notably, GitHub use Ruby and Rails to this day. In this paper, I will focus on one specific library (a small program that can be used by other programs) created in the Rails context, namely Active Record. Active Record is an Object Relational Mapper (ORM): it provides a connection between an application and its database. Briefly, an ORM allows a programmer to easily manipulate data by creating, editing, or destroying database records.
Active Record is famous in the Ruby community for having established a specific style of writing Ruby; some Rubyists describe it as one of the most useful and beautiful Ruby scripts ever created. As such, Active Record provides us with a clear and concise object with which to think through how Ruby developers handle and process the data created and received by software applications in a web environment.
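To make the object of the paper concrete, a minimal sketch of the kind of data handling Active Record affords is given below; the `User` model, its table schema, and the in-memory SQLite connection are illustrative assumptions rather than examples from the fieldwork:

```ruby
require "active_record"

# Connect to a throwaway in-memory SQLite database (illustrative only;
# a Rails application would normally configure this in config/database.yml).
ActiveRecord::Base.establish_connection(adapter: "sqlite3", database: ":memory:")

# A hypothetical table; none of these names come from the paper.
ActiveRecord::Schema.define do
  create_table :users do |t|
    t.string :name
    t.string :email
  end
end

# Mapping a Ruby class onto the table is all that is needed to read
# and write records as ordinary Ruby objects.
class User < ActiveRecord::Base; end

# Creating, editing and destroying records: Active Record generates the
# corresponding SQL (INSERT, UPDATE, DELETE) behind the scenes.
user = User.create(name: "Ada", email: "ada@example.com")
user.update(email: "ada@example.org")
user.destroy
```

The terseness of calls such as `User.create` is part of what Rubyists point to when they praise the style of Ruby that Active Record established.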
* * *
The paper is based on ongoing fieldwork with Ruby programmers and is therefore a work in progress.