- Convenors:
- Morgan Currie (University of Edinburgh)
- Catherine Montgomery (University of Edinburgh)
- Karen Gregory (University of Edinburgh)
- Format:
- Traditional Open Panel
- Location:
- HG-05A33
- Sessions:
- Thursday 18 July, -, -
Time zone: Europe/Amsterdam
Short Abstract:
This panel focuses on datafied publics: groups that politically mobilise because they are subject to commercial and government data systems that algorithmically govern them but remain opaque. The panel will give datafication in public participation renewed theoretical-empirical attention.
Long Abstract:
Marres’s term ‘material participation’ describes how objects facilitate political action through everyday doing and making (2012). It is a useful theory for thinking about data publics – for instance, people with similar technical interests who come together to work with datasets and software to political ends. We can also use material participation to understand data publics in another way, by considering how data has “the capacities to organize publics” under the gaze of data-intensive systems (Ibid p. 9). We can ask how people politically mobilise when they are subject to commercial and government data systems that algorithmically govern them but remain opaque. Such collectives come together as a particular type of data public – a datafied public – to understand the ways they are sorted, shaped and targeted and to demand greater control over these processes.
How do datafied publics take shape, and what forms of participation do they engage in? What role do calls for transparency and democratic oversight play in achieving actual, substantive political accountability of these systems (Ananny and Crawford 2018)? From health to social security and policing, the role of datafication in public participation demands renewed theoretical-empirical attention.
We welcome panelists who look at a range of datafied publics, such as:
• patient populations ‘enriched’ for inclusion in clinical trials
• workers of digital platforms in the gig economy
• groups who are unduly targeted by risk prediction models in social security and child services
• communities unduly targeted by policing and border control data
• those muted by online platforms for certain content or identities
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973-989. https://doi.org/10.1177/1461444816676645
Marres, N. (2012). Material Participation: Technology, the Environment and Everyday Publics. Palgrave Macmillan.
Accepted papers:
Session 1: Thursday 18 July, 2024
Short abstract:
This paper seeks to understand what mobilizes data publics to go against Big-Tech companies. We interviewed key digital rights practitioners about their experience of working with people to change the power asymmetries we have with Big-Tech companies.
Long abstract:
This paper seeks to understand what mobilizes data publics to go against Big-Tech companies. We interviewed key digital rights practitioners about their experience of working with people to change the power asymmetries we have with Big-Tech companies. From the semi-structured interviews, four themes emerged: first, the importance of contextual awareness of the ways data can be used and abused in people's everyday lives. Second, distinguishing between real and imagined concerns around data, and how that can inform and push people into action. Third, identifying who is responsible for these problems, so that people know who to demand accountability and scrutiny from. And fourth, thinking about and imagining possibilities of resistance.
Following from these findings, we identified five possible routes for enhancing citizens' 'data citizenship' (Carmi & Yates, 2023), based on what each actor can provide:
1. Governments: providing robust legislation and enforcement;
2. Big-Tech: providing transparent policies, and developing user-friendly policies and support mechanisms;
3. Media: raising awareness about the everyday contexts of harms and risks for different communities;
4. NGOs: raising awareness, and helping develop negotiating power for society;
5. Society: using existing mechanisms such as Citizens Advice, using alternatives, and using networks of literacy for data literacy.
Importantly, we found it was difficult for the NGO practitioners to imagine what an ‘ideal world’ would look like. We argue that once we can imagine and verbalize how we want our data-driven future to look, it will be easier to start pro-actively strategizing and working towards it.
Short abstract:
We investigate how participatory data-based projects maintain their relations to the community they emerged from, or aim to serve. To do so we conduct case studies of six projects which offer alternatives to the circumstances they critique, through participatory governance and maintenance processes.
Long abstract:
Participatory data-based projects often develop in response to data-malpractices. Such projects can be institutionalisations of community organising of datafied publics themselves (bottom-up) or they may be reactions of established organisations, aiming to serve and include affected groups (top-down).
In both cases, the professionalisation and management of these projects raise the question of how those organisations maintain their relations to the community they emerged from, or aim to serve. To address this question, we are conducting case studies of projects which offer alternative solutions to the circumstances they critique, or contributions to dissolving opaqueness, by employing participatory processes in the projects’ governance and maintenance. In particular, we are investigating how participatory elements are employed to follow people's needs and wishes.
We analyse three comparative pairs of cases (one top-down, one bottom-up):
- two alternative menstruation tracking apps, which process data on the user device rather than the provider's end;
- two resources for reverse engineering ad-targeting;
- two speech training datasets balancing multi-dimensional biases.
Following Light et al. (2018), we conduct a walkthrough of the project's interfaces to the public and analyse the vision, operating model and governance. By interviewing people in the facilitating organisation we further investigate the organisation of participation and the constitution of the participating public.
We aim to provide insights into how the interests of datafied publics can be supported and into the challenges of structurally organising respective relationships. Further, we hope to sharpen our understanding of the interwovenness of normative and hands-on participation within the panel.
Short abstract:
This project is about the politics of data literacy and data rhetoric across federal, regional, and local scales of public planning expertise. We examined a collection of publicly archived teleconference workshops that teach local planners how to use various data tools.
Long abstract:
Given the racist histories of mapping practices that actively construct today’s geopolitics in the United States (e.g. redlining), we must critically investigate how jurisdictions maintain power through data literacy initiatives that uphold some interpretations of public life over others. This project is about the politics of data literacy and data rhetoric across federal, regional, and local scales of public planning expertise where scopes of oversight are manufactured by resource allocation. Paternalistic education produces knowledge hierarchies that maintain dominant interpretations and applications of data. In other words, though education may have the potential to increase inclusion and participation, here it remains a mechanism which reifies the politics of scale and asserts the power of oversight.
In Southern California, the regional planning organization directs local planners’ data literacy through weekly workshops. Importantly, the top-down flow (federal to state, state to region) by which education occurs, should be viewed critically. To such critical ends, we examined a collection of publicly archived teleconference workshops that aim to teach local planners how to use various data tools embedded within best practices, recommended workflows, and policy frameworks. Our analysis investigates this knowledge production and circulation of public planning data tools. We offer insights into how the production of this jurisdictional hierarchy within the planning profession is reified through such education initiatives.
Short abstract:
This paper examines the historical and ongoing processes through which urban publics are constructed as in need of police-centered surveillance — as well as the practices of resistance that challenge this conception as a carceral racial logic.
Long abstract:
This paper examines how municipal surveillance technology remediates the language of public administration and governance through a carceral, racial logic — in particular, by shaping which actors have a right to articulate and decide upon ‘exigent situations’ within the city.
I attend to the political work of securing public opinion by police — within an invoked ecosystem of crime, crisis, and exigency — such that urban populations can be hailed as citizens who want surveillance, even if they do not realize it (yet).
Drawing from situated, anti-surveillance activist research in San Diego, I trace how the purported benefits of the ‘smart city’ come to be framed not through the notion of a ‘common good’ — that is, a site where issues of privacy ought to be weighed against promised, if nebulous, ‘future-oriented’ civic gains. Instead, municipal actors like the police move strategically to claim virtues such as ‘transparency’ as a quality that they guarantee, positioning themselves as good municipal stewards of urban data and surveillance — a claim that relies upon and reproduces a carceral logic demarcating which groups have access to the care of the state, and which are in need of management and policing.
In sum, this paper looks beyond discourses of surveillance technology that seek to evaluate their technical efficacy or technological promise; rather, it attends to the ongoing processes through which urban publics are constructed as in need of surveillance. Finally, I examine ongoing fights for transparency and accountability that engage surveillance on this political terrain.
Short abstract:
After more than ten years, it is still unclear how mature the field of health data cooperatives is. Ours may be the first independent study of health data cooperatives. Data cooperative models encounter specific day-to-day challenges, which must be addressed to prove their viability.
Long abstract:
More than ten years after the appearance of the first health data cooperatives, it is still unclear how mature the field is. We found no independent empirical research reporting on the experience of sector leaders, and no account of the key day-to-day challenges and best practices. This is striking, given the prominence the model has been given in public debates on alternatives to extractive data governance models. Our study may be the first independent empirical study of health data cooperatives.
We scoped the field with a literature review, and conducted a case study of two leading initiatives in Europe: Salus.coop from Barcelona and Liverpool Civic Data Cooperative. We interviewed cooperative managers to discuss the state of the art. Building from our interest in infrastructure, organisation and epistemology intersections, we assumed that the processes, principles and templates stipulated by the data cooperative model must have specific consequences for the conduct of research that it exists to enable. If confirmed, we also wanted to know how data cooperatives are tackling emerging challenges, and what best practices can be identified.
We confirmed the existence of specific challenges, but also the lack of specific strategies to mitigate them. Some challenges are persistent, but have been neglected as other issues perceived as more urgent and fundamental are given priority. Until these issues are explicitly tackled, the sector will not mature into a tested data governance alternative. To this day, health data cooperatives remain a largely unproven model.
Short abstract:
This talk will present an analysis of how community organizers succeeded where journalists failed, mobilizing diverse publics by translating the potential harms of "smart" streetlights through diverse community histories of institutionalized violence.
Long abstract:
This paper sketches an epistemology of technology drawn from the practices of residents, especially community organizers, attempting to know “smart city” projects in San Diego, a highly militarized border city. I present two years of participant-observation of the work of a 30-organization coalition that responded to San Diego’s acquisition of 3,000 surveillance-technology-equipped streetlights by calling for oversight policies while also defunding several smart city technologies, including the “smart streetlights” and Shotspotter gunshot detection systems.
When journalists first published about the “smart” streetlights, there was little public outcry – only whispers among San Diego’s governmentally-connected civic tech publics. A year later, however, racial justice organizers were able to translate the technology into narratives that could mobilize wider publics to make demands on the city. Organizers accomplished this not only by investigating the technologies’ present configurations but by speculating about the technologies’ future possibilities. Organizers drew not only on documentation of or interaction with the technologies or technology companies, but also on personal and community histories of interactions with the institutional sponsors of the technologies, such as the San Diego Police Department, the FBI, or Border Patrol. Organizers drew on differentiated histories of repression, enclosure, and extraction to imagine the affordances of the smart streetlights and diversify the publics engaged in the controversy over surveillance in San Diego’s smart city.
Short abstract:
This project interrogates how policy discourses within the EU and UK position workers as participants in anticipating and mitigating harms stemming from the deployment of data-driven systems in workplaces. It assesses how datafied publics are anticipated through policy instruments.
Long abstract:
Policy discourses on the accountability of data-driven systems often pose the participation of workers as key in mitigating algorithmic harms experienced in the domain of work. Methodologies for anticipating risks and opportunities such as Algorithmic Impact Assessments consider workers as stakeholders promising a fairer or more equal deployment of data-driven systems in workplaces. My research is interested in how these initiatives anticipate and produce datafied publics in the realm of work.
By conducting a document analysis of policy documents and grey literature across the European Union and the UK, this project critically assesses how AI policy and data governance position workers as participants in mitigating, containing, and anticipating harms occurring through the deployment and use of data-driven systems at work. In doing so, it asks how workers are anticipated to play a role in the deployment and design (ex ante) as well as in the scrutiny (ex post) of data-driven systems. Based on this, I will develop a systematisation of regimes of anticipation: how do they imagine organizational structures, workers’ roles, knowledge contributions and practices, institutional processes of participation, and the anticipated harms addressed in the analysed methodologies? This provides a framework by which to assess how datafied publics are constituted by participation processes in various industries, including the gig economy.
Underlining this research is the question of whether these processes are aspiring towards the normative vision of participatory parity (Fraser, 1992) wherein workers’ knowledge, status and position can meaningfully shape, refuse, and influence data-driven systems at work.
Short abstract:
This research examines the controversy surrounding the 2021 Facebook Files/Papers leak, analysing which publics and issues gained prominence in making the disclosures public across multiple media settings, and shedding light on the struggles to hold platforms accountable in digital societies.
Long abstract:
Big Tech has been involved in numerous platform controversies in recent years. Data leaks, whistleblowers, and social experiments have raised alarms about the toxic and unaccountable power of platforms and, more broadly, about the increasing crisis of accountability in digital societies (Khan, 2018; Marres, 2021; Nissenbaum, 1996). At the intersection of digital sociology, media studies, and STS, this research explores how different publics across journalism, activism, politics, and research make and unmake connections between social media platforms and societal harms. The study focuses on mapping a specific platform controversy: the Facebook Files/Papers, a 2021 leak of internal documents from Meta by the former employee Frances Haugen. The disclosures exposed what Meta knew about the consequences of its interface designs, data, and algorithms (Hendrix, 2021; Horwitz, 2021). By combining digital and ethnographic methods, I am following the disclosures across different media settings to analyse how the disclosures were made public and which publics and issues gained prominence during the controversy. As I will show, critics promoted a ‘strategic causalism’ to solidify the connection between Meta platforms and specific societal harms, countered by Meta's 'strategic ambiguity' to undermine such claims. This production of ambiguity would be crucial for platforms' power and the intended dispersion of their responsibility. Moreover, some actors and harms became more salient than others in a very US- and EU-centric theatre of accountability. This will allow us to better understand the translations and asymmetries that occur when platforms are put on trial.