- Convenors:
- Norma Möllers (Queen's University)
- Anne K. Krueger (Humboldt University Berlin)
- Stream:
- Measurement, commensuration, markets and values
- Location:
- Bowland North Seminar Room 10
- Start time:
- 26 July, 2018 (time zone: Europe/London)
- Session slots:
- 3
Short Abstract:
This panel invites research that investigates how systems of data practices not only reproduce social order but actively shape it. We specifically seek research that examines the mechanisms of data production, the political economy of data practices, and the consequences for people's lives.
Long Abstract:
Large-scale data practices and technologies and their consequences are a pressing area for research, particularly at a moment when not only scholarship across the social sciences and humanities but also the media are buzzing about the consequences of "big data." New forms of data collection and analysis have proliferated massively in the major institutions of social life, encompassing markets, politics, public administration, policing, the legal system, citizens' everyday lives, and even social science research itself. Everywhere, instances of human (and non-human) behavior are increasingly collected at scale, translated into countable entities, categorized, and analyzed for the purposes of sorting, valuing, pricing, ranking, evaluating, and governing people and things.
Research has only very recently begun to investigate how these systems of data practices interlock across different social institutions and - more than merely reproducing social order - actively shape it. Because these practices and technologies affect people on an unprecedented scale, it is important to gain empirically grounded insights into how they affect social order and everyday practices and experiences. This panel seeks to put into conversation research that examines (1) the mechanisms of data production: what is collected, according to which logics it is ordered, and what categories of people and things are produced in the process; (2) the political economy of these practices, that is, who collects, owns and sells these large-scale data; and finally (3) to what ends and with what consequences for people's lives these data are used.
Accepted papers:
Session 1
Paper short abstract:
The development of massive data collection from patients through the Internet of Things allows for a profound reshaping of the patient-physician relationship. Drawing on different case studies, our research critically analyses the foreseen transformations.
Paper long abstract:
The rapid development of health applications on smartphones and the investment of the GAFA companies (Google, Apple, Facebook and Amazon) in the health industry are just two examples of how information and communication technologies are permeating the health sector. These technologies allow for massive health data collection from various sources as well as its rapid processing by sophisticated algorithms. Connected objects enable the real-time collection of an individual's biological parameters as well as various environmental data. Health data are therefore no longer confined to medical practices or hospitals but are collected directly in patients' everyday lives. These particular data production mechanisms are reshaping the relationship between medical practitioners and their patients, previously based on a hierarchy favouring physicians. Indeed, the anticipated outcome of such mechanisms and their uses is to empower individuals to gain control over their health data and to be better informed about their health without being dependent on their physicians.
However, this much-anticipated new order can be challenged: is this a new way of controlling and categorising individuals in order to morally assess their health habits from their data? This research will explore the anticipations of and possible challenges to this situation by analysing specific existing connected health objects and applications developed in the health sector and their implications for patients.
This research is part of an interdisciplinary research programme on big data and personalised medicine in France (the DataSanté Research Program), and the case studies analysed will be selected from its cluster.
Paper short abstract:
Digital insurance, which utilizes sensors to track and influence the insured's behaviour, has become increasingly common. In my presentation, I discuss the effects of insurtech-based governance and analyse the ways in which policyholders domesticate digital insurance in their everyday practices.
Paper long abstract:
Recently, there have been great expectations for Big Data to revolutionize the insurance industry, for example by providing more accurate means to calculate risk and set premiums. In this presentation, I analyse one of these data economy prospects, digital life insurance, which utilizes activity wristbands and other wearable devices to track and influence the behaviour of the insured. As technology rapidly changes, it is unclear how the digitalization of insurance affects insurance logic and the lifestyle management that insurance companies conduct. Furthermore, there is no research on how policyholders use digital insurance in everyday life. Based on multifaceted qualitative data including focus group discussions, interviews and participant observation, I discuss the effects of insurtech-based governance and analyse the ways in which the insured domesticate digital life insurance in their everyday practices. I demonstrate that digital life insurance entails ethical norms of good health and prudent action which aim to influence the (health) practices of the insured. However, the policyholders do not simply adopt these ethical norms but reinterpret and transform them in the domestication process.
Paper short abstract:
We investigate the implications of quantification and datafication for regulatory practices. Enriching a well-tried phase model of regulation with insights from STS, we present a conceptual framework that makes it possible to capture empirically the ways in which datafication transforms social ordering.
Paper long abstract:
Although practices of computer-based quantification pervade modern societies, there is little systematic research on how they affect social ordering. A key site of social ordering is regulation, i.e. the intentional attempt (by states, by other organizations or as self-regulation) to alter behaviour in order to reach specified goals (Black 2002). Yeung (2017) has developed a general framework for studying practices of data-based regulation, distinguishing three analytical components: information gathering, standard setting and behaviour modification. In order to grasp more deeply the material dimension of data-based regulation, we enrich this framework with a number of suitable STS concepts.
Information gathering encompasses both the collection of data and its analysis. We interpret digital data as "immutable mobiles" (Latour 1986): information stored in a format that makes unintended changes or losses improbable and facilitates quick transportation, recombination, and aggregation, thus reshaping the political economy of regulation.
Standard setting is the process of defining which goals are to be attained and how. Far from being neutral instruments, technological artifacts always embody a politics of their own (Winner 1980) that potentially superimposes itself on governance processes.
Behaviour modification refers to the means by which behaviour is influenced. In order to conceptually grasp the increasing importance of technological architectures, we combine insights on "regulation by design" and "regulation by technology" with an ANT perspective (Latour 1990).
As an illustration of how our framework can serve as a ground for comparative studies, we discuss several concrete cases. We conclude that datafication enables more responsive and more comprehensive forms of regulation.
Paper short abstract:
This contribution discusses the practices of data brokers, focusing on the production of identity graphs as a core feature thereof. The automated mapping of individuals' identities serves various (indistinct) purposes, including profiling, scoring, and social sorting, with accordingly serious societal impacts.
Paper long abstract:
Embedded in the big data paradigm is the questionable proposition of data being the "new oil of the digital economy" (Wired 2014). Accordingly, data brokerage flourishes, based on the exploitation of information from various sources. Large-scale data gathering practices also comprise information about individuals, including their online and offline behaviour and actions. Behind the scenes, these practices involve various actors, perceivable as a "surveillant assemblage" (Haggerty/Ericson 2000). Companies like Acxiom, LexisNexis, Oracle or Palantir Technologies gather and monetize massive amounts of consumer information for data-driven marketing and many other purposes. Palantir, for example, is among the strategic partners of the NSA and was involved in the creation of the surveillance tool "XKeyscore" revealed by Edward Snowden (Greenwald 2014; Biddle 2017). Among other things, this tool categorises users of privacy-friendly software as "extremists" (Doctorow 2014). Oracle, too, cooperates with the NSA. There is thus a close relationship between big data and surveillance (Lyon 2014), where economic practices and modalities of security governance increasingly overlap. Social structures are exposed to this indistinct interplay and are incrementally altered through semi-automated practices of (big) data processing (Strauß 2018). This contribution discusses these practices with a focus on a crucial mechanism thereof: the production of identity graphs "including what people say, what they do and what they buy" (Oracle 2015). Basically, these practices exploit our "identity shadows" (Strauß 2011), whereby individuals are profiled in online and offline environments to produce accurate models of their identities, serving various purposes and business cases in the private and public sectors.
Paper short abstract:
This paper looks at how knowledge about consumers is co-produced for marketing purposes through big data, which is perceived as objective and infallible. This may change how knowledge is used in marketing settings, affecting marketing professionals and consumers alike.
Paper long abstract:
Big data analytics (BDAs) are deployed in market research with the perception of being an objective and infallible technology for uncovering consumer behavior and preferences. Especially compared with traditional methods like surveys or interviews, BDAs are depicted as more reliable for creating consumer knowledge. This assumption often goes hand in hand with a belief in the supremacy of (digital) data. The use of BDAs in organizations, however, requires collaboration among an array of teams, each with their own expertise and practices. Knowledge is never produced in vacuo but co-produced: many factors are involved in how information is gained through BDAs, which subsequently influences how this knowledge is used.
In this early-stage research, I aim to show how BDAs influence the co-production of knowledge in marketing settings in organizations. When information resulting from big data technologies tends not to be questioned by marketers and is perceived as mirroring reality, consumer knowledge risks no longer being seen as a set of approximations. Instead, conceptualizations of consumers become postulations, incontestable because they are established through big data analysis. These practices tend to neglect the subjectivity involved in BDAs, such as the necessity for data manipulation and interpretation and the possibility of errors. At the same time, they intensify the social sorting of consumers. Individuals are reduced to their marketable characteristics, for which big data analysis and the accompanying algorithms define the criteria. These marketable characteristics set the rules for who has access to markets, goods or services, and often also their pricing.
Paper short abstract:
The paper analyses how social processes of commensuration underpin European counter-terrorism finance. It focuses on the data production of national authorities in charge of generating actionable financial intelligence. It shows that commensuration facilitates the sharing of intelligence within Europe.
Paper long abstract:
This paper analyses how European Union (EU) member states share financial information for countering terrorism finance. All EU members have their own Financial Intelligence Unit (FIU), collecting large amounts of financial information from the private sector and transforming it into financial intelligence to be used by governmental actors. This process is mostly organized at the national level. The information derives in particular from banks, which are obligated to file Suspicious Transaction Reports (STRs). Each FIU analyses these reports and, upon encountering suspicious activities, conducts additional research on the identity of the senders, their financial history, criminal records, and other personal information. In this way, information is transformed into (financial) intelligence, gathered in a file and transferred to executive authorities.
However, FIUs also (have to) operate at the international level, for instance exchanging financial information with their foreign counterparts upon request and within the framework of bilateral or multilateral agreements. In doing so, they have to transcend the specific cultural, economic, and political knowledges that make up the national STRs and the resulting files. This paper pays attention to the social dimensions of these data flows. It examines how financial intelligence is made 'commensurable', that is, how different knowledges are reduced to common metrics (Espeland and Stevens 1998). By analyzing annual reports of European FIUs, conducting semi-structured interviews and carrying out participant observation with an FIU, this contribution unpacks how situated national security issues and disparate (national) practices are simplified and made commensurable through processes of classification, standardization, and quantification.
Paper short abstract:
This paper focuses on social media platforms, universities and the labour market. Specifically, it examines (i) how social media companies motivate actors to produce data, (ii) what kind of social relations they structure through their infrastructural design, and (iii) what the consequences are.
Paper long abstract:
This paper investigates the increasing role of global private actors in governing higher education by focusing on data and data practices around graduate employability and skills. More specifically, we focus on social media platforms as basic infrastructures in contemporary platform capitalism (Srnicek 2017). We draw on three related processes: the increasingly complex and transnational labour market that makes skills matching difficult (Moore and Morton 2017); the increasing responsibility of higher education institutions (HEIs) for the employability of graduates (Harvey 2000); and lastly, the rise of the digital economy, in which digital platforms are basic coordination infrastructures (Helmond 2015). Not much is known about the practices and measures used by HEIs to mediate the transition of students into employment. Moreover, there is a paucity of research into the role and significance of social media, digital platforms and data in employability practices. This paper addresses this gap and presents some first findings of a research project that has collected data from a survey of 900 European HEIs, web-scraping of the web pages of 1,000 European HEIs, a sample of user profiles from LinkedIn, and document analysis of employers' recruitment practices. Bringing this rich empirical data together, we discuss the mechanisms of graduate employability data production, the political economy of this data and the consequences for students, graduates, HEIs and employers.
Paper short abstract:
Amidst the so-called data deluge, industry cloud providers are emerging as key partners for domain scientists seeking to expand computational capacity. This gives rise to novel forms of currency and (ac)counting practices, producing what I call 'situated valorizations' of resources and engagements.
Paper long abstract:
Ever greater scales of storage and computing power are required as data-intensive research increases its reach and scope. Industry cloud providers surface as key partners for domain scientists as both work to keep pace. This paper draws on ongoing ethnographic fieldwork with a collaboration between a large industry cloud provider and a group of 'domain' researchers in the life sciences, in which the latter deploy the former's resources in their data-intensive scientific workflows. Here, I focus on two innovative features of this collaboration:
(1) Computational resources as currency: I refer here to the practice by which the cloud provider allots a predefined amount of computational resources - what are known as 'cloud credits' - along with technical support free of charge to the scientific researchers.
(2) Novel (ac)counting practices and valorizations: The cloud provider apprehends the domain scientific work first as an abstract set of spikes and dips in an online portal, which provides an account of when the scientists are actively 'spending' their credits. The domain scientists are subsequently prompted to map their scientific practices onto those abstract figures, giving the provider a more substantive account of how they are using their cloud credits.
In this process, both sets of actors are brought not only to evaluate their mutual engagements, but also to produce further 'situated valorizations' that extend to a diversity of spheres - ranging from the creation of compelling marketing materials to the deployment of user-centered design practices such that cloud platforms can enrol and accommodate increasingly heterogeneous domain users.
Paper short abstract:
The use of quantitative performance indicators is heavily disputed in the scientific community. However, the circumstances under which those indicators are computed also deserve a closer look. The relevant actors and infrastructures await further investigation.
Paper long abstract:
Approaches to governance and quality assurance in the science system consist of different instruments, of which quantitative research evaluation is among the most prominent. A specific form of science evaluation is the computation of performance indicators, which may be used to inform funding decisions or determine the availability of career opportunities.
While peer review is widely undisputed as a mechanism of quality assurance in science (Wilsdon et al. 2016), data-driven approaches to evaluation - for example the computation of the h-index and the Journal Impact Factor, or the counting of patents or Nobel prizes - are heavily discussed in the scientific community (Hicks et al. 2015; Larivière and Gingras 2010; Werner 2015). Performance indicators are often perceived as powerful and inescapable, which is why researchers supposedly adapt their behavior to the evaluation criteria (Beel et al. 2009; Butler 2004; Franck 1999), leading to both intended and unintended effects.
The investigation of quantitative science evaluation practices raises questions of comparison and quantification, recognition and control. We aim to understand the computation of performance indicators as 'algorithmic techniques' (see Rieder 2017) that contribute to the internal and external regulation of the science system.
We seek to zoom into the computation of those indicators and the underlying data infrastructures and to investigate how they come into the world. We shed light on relevant actors and power relations, and thus contribute to a political science of algorithms.
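As a purely illustrative aside (not drawn from the paper itself), the following minimal sketch shows how one indicator mentioned above, the h-index, is conventionally computed from a list of citation counts: the largest h such that h of an author's papers each have at least h citations.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 2 and 1 times yield an h-index of 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```

Even this trivially simple computation depends on an underlying data infrastructure (which citation database is counted, and for which items), which is precisely the kind of condition the paper proposes to investigate.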
Paper short abstract:
In this paper, we explore how the data underlying alternative metrics are collected, what drives their collection (e.g., their technical accessibility), how they are ordered, and how these orderings resonate within existing frameworks and semantics of science such as open science or the societal impact of science.
Paper long abstract:
Large-scale data practices have become established not only within specific specialties of science but also in current approaches to evaluating and monitoring scientific output. One of the major expressions of this transformation towards digital, large-scale data practices in scholarly evaluation are so-called alternative metrics, which collect and harvest digital traces of the reception of scholarly output, taking stock of almost every form of communication in the digital universe. In this paper, we aim to explore how these data are collected, what drives their collection (e.g., their technical accessibility), how they are ordered and how these orderings resonate within existing frameworks and semantics of science such as open science or the societal impact of science. We argue that the classification of metrics from novel forms of scholarly output has been driven not only by specific technical or infrastructural conditions that prefigure specific categories of valuation but also by specific regimes of legitimation in the socio-political governance of science. Our material consists of almost 400 scholarly articles and position papers as well as more than 20 interviews with digital platform providers, publishers and scholars in the realm of digital science evaluation.
Paper short abstract:
This methodological paper outlines an ethnographic approach to studying how social media data is used in social research. The approach draws on various frameworks in STS and related fields, and outlines techniques that can trace material, methodological and epistemological transformations of data.
Paper long abstract:
This methodological paper outlines an ethnographic approach to studying how human geographers use georeferenced social media data in practice. STS and related approaches argue that understanding knowledge practices requires a joint focus on epistemological, intersubjective, material and institutional processes. However, there is a shortage of empirical studies that examine computational social science from such an analytical perspective. Existing studies either investigate epistemology and methodology, or interrogate the interpersonal and institutional dynamics associated with data practices. Linking these perspectives necessitates techniques that can trace the material, methodological and epistemological transformations of data in action, which requires drawing on distinct frameworks in STS and related fields.
This project interrogates how social media data is enacted in human geography research, and the associated changes in knowledge practices, for two reasons. First, human geographical explanations often have direct implications for how we understand and enact social order. They can highlight relationships between seemingly unrelated processes by reconceptualising scale, distance and stasis, thus questioning arguments and practices that reinforce the status quo. Second, there have been longstanding epistemological debates within human geography regarding the benefits of quantitative and qualitative methods. Social media data can be analysed using both of these strategies and combinations thereof. They can consist of geotags, text and/or images, and result from situated activities. This is different from data previously used in human geography, which was amenable to either quantitative or qualitative analysis. Understanding how scholars engage with such novel multimodal data can highlight opportunities for dialogue between the quantitative and qualitative research traditions.