- Convenors:
- Ros Williams (University of Sheffield)
- Gina Neff (University of Cambridge)
- Helen Kennedy (University of Sheffield)
- Format:
- Traditional Open Panel
- Location:
- HG-05A33
- Sessions:
- Wednesday 17 July
Time zone: Europe/Amsterdam
Short Abstract:
As STS scholarship evidences, digital technologies are not always good for societies. However, there has been increasing appetite to delve into how digital technology might do social ‘good’. As part of the UK-based ESRC Digital Good Network, we invite interventions opening up questions of the good.
Long Abstract:
As STS scholarship has repeatedly evidenced, digital technologies are not always good for societies, even though they are often framed in public discourse as panacea, salve or fix. Increasingly, however, there has been an appetite to delve into how digital technology might do social ‘good’. A range of speculative and design methodologies, many of which require disciplinary traversal beyond the social sciences and humanities, are being deployed to generate ideas of what constitutes the ‘good’, for whom, and how digital technologies might be part of the endeavour to bring the good into being.
An orientation toward the idea of good is, of course, to take a normative stance. It invites us to imagine ‘the digital good’ in its multiple potential ways. To ensure that digital technologies have good outcomes for people and societies (indeed, to discern what a ‘good outcome’ might even be) we need to turn our attention to what the digital good should look like and how it can be achieved. Whilst the many potentials for harm should be borne in mind and taken seriously, this turn requires us to move (our critical senses intact) towards imagining futures and their attendant digital technologies.
As part of the UK-based ESRC Digital Good Network, which comprises an interdisciplinary mix of scholars from across STS, psychology, computer science, design, and critical media, we invite interventions that seek to open up the question of the good. We are keen to learn from contributions that reveal the challenges and possibilities of cross-disciplinary thinking in this area; that are interested in “what should be” rather than “what shouldn’t be”; that centre ideas of social equity, sustainability, resilience/wellbeing; and that are open to generative, future-focused enquiry into what ‘the good’ is, as well as how digital technology might be employed to move us closer toward it.
Accepted papers:
Session 1: Wednesday 17 July, 2024
Short abstract:
Digital technologies are not always good for societies. However, there is increasing appetite to consider how digital technology might do social ‘good’. We may never arrive at a settled definition of a good digital society, but we contend that discussing what it might look like is essential.
Long abstract:
Digital technologies aren't always good for societies; even the most well-intentioned technologies can end up doing harm. Algorithmic decision-making can introduce bias under the guise of fairness; policies to make social media safer can be experienced by users as doing the opposite. It is in this context that Ruha Benjamin exhorts: ‘Remember to imagine and craft the worlds you cannot live without, just as you dismantle the ones you cannot live within’. Yes, we should challenge how things are. But we must also identify how we want them to be. To this end, in this paper, we speculate on whether the notion of a good digital society helps us do that.
We confront the complexity of the word 'good', reviewing philosophical traditions, computing and design ethics, and political-economic analyses. We consider who gets to decide whether/how/when/where/for whom digital relationships are good. We explore current debates about the relative merits of fairness, accountability, and transparency on the one hand, and justice, equity, and rights on the other, as evidence of how contested the notion of a good digital society can be.
The paper acknowledges the scale of the challenge. Nonetheless, thinking about what a good digital society looks like is essential. We need to hash out differences, disagree with each other. We may never arrive at a settled definition of a good digital society. To imagine and craft worlds we cannot live without, though, we need to talk about what kind of digital society we want to live within.
Short abstract:
I propose a distinction: for-good intervention, with its common structure of a benevolent intervener purportedly distributing benefits to beneficiaries; versus the appropriation of digital technology for social mobilization, solidarity, and survival, pointing toward Petty’s concept of co-liberation.
Long abstract:
This talk contributes to discussions of what ‘the digital good’ might mean in STS by proposing a bifurcation of existing endeavors that can be said to have been directed toward ‘the good.’ On the one hand, I use the term 'for-good intervention' to bring together endeavors that have the common structure of a benevolent intervener who purportedly distributes benefits to another group of beneficiaries. These endeavors include international development and ICTD (Escobar 1994, Ferguson 1994, Pal 2017, Irani 2019), humanitarianism and humanitarian tech (Fassin 2010, Madianou 2019), philanthropy and digital philanthropy (McGoey 2015), and corporate social responsibility, including from large tech companies (Welker 2009, Pei, Olgado, & Crooks 2021). On the other hand, I group together endeavors that have appropriated digital technology for social mobilization, solidarity, and survival (Benjamin 2019). I propose an analogy to Fanon’s (1961) distinction between decolonization that was voluntary on the part of the colonizers, where the colonial compartmentalization of society is maintained; and independence that was not granted voluntarily by the colonizers, where greater potential existed to remake society fundamentally. Voluntary benevolence from relatively powerful groups can have important material effects for oppressed communities, and even more so for those who are the helpers (Malkki 2015), yet it tends not to address the structural conditions that produce inequality. I draw our attention to Tawana Petty’s (2018) work on the concept of co-liberation to counter conservative tendencies in for-good intervention.
Short abstract:
The operative liberal concept of privacy is closely related to property; default frames for privacy protection feature "owning" one's data. Yet efforts towards a just digital future require re-thinking privacy via feminist scholarship on self-ownership, responsibility, and care relations.
Long abstract:
Protecting privacy is key to digital technology development. It is particularly salient for the FemTech industry given the ongoing criminalization and surveillance of (women's) reproductive capacities; the 2022 US Dobbs decision to revoke abortion rights unleashed a flood of concern about period tracking apps, for example. Drawing from this empirical context, this paper offers a conceptual exploration of privacy. The operative liberal concept of "privacy" is closely related to property, and consequently the default frames for privacy protection feature "owning" one's data and making decisions about it. Yet the new political-economic regime Zuboff has called "surveillance capitalism" centers precisely on digital data being owned by the tech companies providing now-essential services. Efforts to protect privacy often operationalize a sort of salvage work, wresting data ownership back into consumers’ and citizens’ hands. Yet this is almost inconceivable with respect to the monopoly power of big American tech companies, while citizens report resignation to a “no-exit situation” and regard sharing one’s data as an acceptable tradeoff for convenience, efficiency, and participation in the latest digital frontiers. The “right to be forgotten,” a key principle of the EU’s GDPR, is hard to imagine given the current opacity of digital tracking practices. This paper asks whether efforts towards a just digital future require re-thinking privacy otherwise than as ownership. Taking inspiration from Petchesky's 1995 "The Body as Property: A Feminist Re-Vision,” I consider privacy via responsibilities and care relations. What would need to change to make such a transformation and its concomitant relations of trust possible?
Short abstract:
Amongst calls that ‘the internet is trash’, there exists a nostalgia for ‘the good bits’ of the internet. This paper reports on a research workshop called ‘Towards a Positive Internet’, which drew on speculative fiction methods to challenge the perceived internet ‘crisis’.
Long abstract:
Amongst calls that ‘the internet is trash’ (Buck et al., 2020), there exists a nostalgia for ‘the good bits’ of the internet. In July 2023, 22 scholars and practitioners from Australia participated in a two-day speculative fiction workshop called ‘Towards a Positive Internet’. This paper reports on that workshop, which emerged from a desire to move away from ‘big critique’ (Burgess, 2022) and challenge the perceived internet ‘crisis’. In designing the workshop, we took inspiration from bell hooks’ (2000) ode to love, strategically and playfully leaning into kitsch and femme aesthetics: a juxtaposition to the monochrome and masculine logics that dominate major platforms. In practice, these considerations materialised through confetti, stickers, references to fandom, and tactile activities involving colour, glue, clay, blocks, and collage. The workshop activities scaffolded from reflections on past practices (what did we love about the internet of old?) to present observations (what do we love about the internet now?), before asking the group to speculate and ‘backcast’ from the future how we might get to a positive internet. Participants were generative and generous, offering critical reflections: what even is a ‘positive’ internet? For whom? And through what means? Emerging through these engaging discussions were emphatic descriptions of internet nostalgia, including intimacy with strangers, desires for customisation, the value of interoperability, the counterintuitive joy of the slow and purposeful, and a conviction that a positive internet is not only possible but presently exists amongst the trash.
Short abstract:
Locating responsibility for the 'goodness' of a digital service is challenging. In many high-impact contexts, responsibility for outcomes rests with some form of professional. These professionals are governed by associations, whose relationship to ‘digital goods’ is the focus of this paper.
Long abstract:
Our paper explores how professional governance associations function as fora through which professionals (e.g., doctors, lawyers, public planners) narrate and contest situated notions of ‘the good’, including in relation to digital practices (cf. Nissenbaum 2009). We first describe how professional associations have recently tended to engage digital governance issues as matters of efficiency and/or as potential sources of liability to be managed (Greenleaf 2017). We argue that these associations might draw on their existing forms of leverage (e.g. professional certification, continuing education requirements, disciplinary authorities) and governance logics (e.g. conflict of interest, duties of care) not only to negotiate but also to enact collective articulations of a context-specific ‘digital good’.
What makes professional associations a potentially generative site for locating the ‘digital good’ are their participatory, situated, and adaptive characteristics. Recent moves to integrate non-professionals (e.g. patients, clients, citizens) into professional governance processes indicate increasing interest in addressing structural inequities through centering the voices of those ‘living closest to the problems’ (e.g. Liao & Ma 2019). However, professional governance bodies are in no way immune to capture by powerful actors in digital political economies. Within these contexts, we describe how data governance, IT vendor management, and automation are flexibly interpreted and narrated with respect to shared professional values, norms, and interests (cf. Latour 1984). We offer our own experiences developing a cross-disciplinary training program for professional students to support their principled and strategic engagement with their respective professional governance bodies towards the realization of context-specific ‘digital goods’ (cf. McDonald & Gansky 2023).
Short abstract:
Platform companies rely on Trust and Safety teams to advance “good” behavior. This ethnographic study reveals that T&S professionals navigate conflicts between achieving “good” on the one hand and their companies’ business goals and organizational dynamics on the other, conflicts structurally conditioned by the nature of these companies.
Long abstract:
Platform companies make decisions surrounding what user-generated activity is admissible. The work of defining, preventing, and stopping the abuse and misuse of platforms is tasked to “Trust and Safety” (T&S) teams. These teams act against phenomena like “fraud,” “account takeovers,” “misinformation,” or “spam.” The design of these strategies, including policymaking and operationalization, relies on normative ideas of “good” behavior. In this paper I ask: how do Trust and Safety professionals work towards the goal of “good”?
To answer this, I conducted an ethnographic study of the T&S field, including interviews with 30 T&S professionals. I find that T&S professionals engage in constant “persuasive work,” aiming to connect the normative goals of “good” to the incentives, designs, and limitations of platform companies. Oftentimes, T&S professionals perceive their normative goals to be in contradiction with the business aims of the company, like user growth or profit. To address this, they embrace the language of “risk” and engage with stakeholders throughout the company. Furthermore, T&S workers also see their normative goals challenged by the organizational dynamics of the firms they work at. As many platform companies are publicly traded, leadership often responds to shareholder pressure or anticipates shareholder behavior. This leads to frequent changes in organizational strategy and design, including layoffs and “re-orgs,” which limit the perceived success of these normative strategies. Ultimately, my work shows that the achievement of normative goals of “good” by contemporary platforms is structurally conditioned by the incentives, designs, and limitations of platform companies.
Short abstract:
This paper is based on a short-term qualitative study completed as part of the Digital Good Network internship programme with the BBC. The research explores the experiences and perceptions of participants in an internal Microsoft Copilot pilot at the BBC, providing insight to guide future rollouts.
Long abstract:
Technological advancements have created profound shifts in the way work is performed. Some suggest technology enables greater productivity and efficiency at work, speeding up the completion of tasks through automation and increasing industrial output. Others fear widespread unemployment caused by the replacement of human labor by machines.
Recent developments in AI have been no different. Public discourse is dominated by conversations about the transformative impact of Generative AI on work, coupled with growing fears about the potential displacement of workers by this new technology. Scholars, policy-makers, and industry have not reached a consensus on whether generative AI is "good" or "bad" for workers.
This paper is based on research completed as part of the Digital Good Network internship programme with the BBC. This research explores the experiences and perceptions of participants in a recent internal pilot programme of Microsoft's LLM-powered productivity tool, Copilot. The research seeks to establish whether the tool allowed people to do more of the work they enjoyed and found purposeful, and less of the work they didn't, and whether this experience varied across workers. It provides recommendations for a responsible, meaningful-work approach to GenAI adoption, one that ensures all workers have equal access to the tools and resources that facilitate meaningful work.
Short abstract:
The digital good should serve the public good, as defined by public interest. Technologies are products and producers of politics. So we need civic-minded technologists, equipped with political education and attunements, to acknowledge technology's sociopolitical dimensions and integrate them into design.
Long abstract:
Engineering’s “culture of disengagement” (Cech 2014) casts a long shadow on society. The anemic civic philosophy preached by lauded tech heroes pretends that politics and power don’t apply to technology, that we can reduce most problems to technical challenges, and that meritocracy is justice. There are bright spots—individual, civic-minded technologists; the Tech Workers Coalition; and the Integrity Institute, a community of practice for “trust and integrity” professionals from technology companies. But these are insufficient. To solve the challenges of contemporary society and democracy, entwined with sociotechnical systems, we need to understand technology’s civic landscape and reframe the technical expert’s role in democracy.
Engineering has a rich history of political activism and rumination about its social and civic responsibility (Layton 1986; Wisniowski 2016). And STS has long tried to understand and define ethical technology. However, computing has grown more deprofessionalized over time, loosening its ethical tethers. Simultaneously, there are growing concerns about the role technologists play in society. So how should civic-mindedness intersect with the education and daily practice of technologists?
I’ve conducted 17 interviews with leading engineering educators, looked at the history of civic engagement and civic-mindedness in engineering and computing, and worked on defining civic professionalism in technology. My research supports an argument that technologists need a political education. Unfortunately, civic learning is scarce in most undergraduate programs and even secondary schools, and it’s particularly uncommon in computing. So we must define and invest in civic learning and a civic culture in computing, because the digital good really demands civic-minded technologists.
Short abstract:
Exploring frugal innovation cases, we examine the creation of social ‘good’ via frugal digital technologies, which, we argue, need to be contextually responsive, relevant, and appropriate: fitting local circumstances and considering not only accessibility but also practical use.
Long abstract:
Frugal Innovation (FI) aims to provide affordable, simple, and accessible solutions in resource-constrained contexts by focusing on essential functions and minimizing resource usage. Digital technology solutions are frequently associated with FI, as they can help overcome resource constraints to enable access to basic products or services such as energy, healthcare, and/or education for poor and marginalized communities.
Akin to the debate on ICT4D, the incorporation of digital technology in FI requires an understanding of the complex socio-cultural dynamics within vulnerable contexts. Critical issues include power imbalances, the digital divide, and the potential to inadvertently deepen exclusion or widen inequalities. Recognizing these challenges, our proposal turns attention to how digital technology might be used in FI to promote social good.
We provide examples from cases that are part of our ongoing research as an interdisciplinary team of scholars and members of the Latin American Frugal Innovation Network (RELIF). We examine the cases from two angles. First, we analyze digital FIs that provide formerly un(der)served users with access to products and services. Second, and less covered in existing literature, we explore the use of frugal digital technology for improving internal organizational processes in social enterprises and mission-driven nonprofit organizations.
We argue that, to bring us closer to the creation of social ‘good’, frugal digital technology solutions need to be contextually responsive, relevant, and appropriate. They should be designed and implemented to fit local circumstances, considering not only accessibility but also practical use.
Short abstract:
In the process of designing a data collection method that could be used by home care workers and advocates to hold employers and policymakers accountable, I explore what “good” means in this context—whose perspective would be centered, when change would be enacted, and what needs to be sacrificed.
Long abstract:
The United States is facing a caregiving crisis in which older adults and people with disabilities are not able to receive crucial home and community based services because of a dearth of home care workers. The technology that has been proposed to address this crisis focuses on the “fraud, waste, and abuse” in the system and further burdens, surveils, and invisibilizes workers. My research explores how we might use technology to help workers and advocates collect data on unpaid work that they could use to identify wage and hour violations and make the case for fairer pay. This could be seen as “good” because the data could be used in a “reversal of the Foucauldian panopticon” to hold employers and policymakers accountable. However, the process of designing this technology raises questions about what “good” means in this context: whose perspective would be centered, when change would be enacted, and what needs to be sacrificed. I aimed to work closely with worker advocates in order to leverage their knowledge and reduce the burden on workers, but I also note that some of their biases and priorities may differ. Even though workers could receive collective benefits through a class action case or policy change in the long term, it is unlikely that they would individually benefit in the short term. Moreover, the data collection process requires workers to share sensitive information and use unfamiliar technology. Therefore, I continue to examine tensions in meaningful engagements with marginalized communities, the role of technology in collective action, and mitigating harm.
Short abstract:
What happens when data is harnessed to explicate rather than obfuscate values, expose rather than perpetuate discrimination, and pluralize rather than homogenize quantified parsings of the social world? I present lessons learned and questions raised by efforts to do “good” with data.
Long abstract:
Data-intensive technologies have been thoroughly critiqued for a number of negative outcomes: they can obfuscate contentious values through the automation of decision-making; they can discriminate against marginalized people through the reification of biases held by their designers and users; they can homogenize sociocultural landscapes through the hegemonic adoption of quantification. But done with care and intention, data-intensive technologies may also be leveraged to the opposite effect. After spending four years ethnographically studying a program called “Data Science for Social Good” and another five years as a lead organizer of said program, I will share stories of projects designed to explicate values rather than obfuscate them, expose discrimination rather than perpetuate it, and pluralize quantified parsings of the social world rather than homogenize them. Drawing on nine years of research and personal experience, plus new data from a recently completed study evaluating the longer-term impacts of these well-intentioned projects, I will present lessons learned and questions raised by efforts to do “good” with data.
Short abstract:
Embedding IVR experiences into the curriculum through collaborative work between students and teachers could transform spaces and relationships in the classroom. It could lead to the development of the digital literacy skills, and the cultural and social capital, of both students and teachers.
Long abstract:
Record numbers of U.K. teachers left the profession in 2023 (Henshaw, 2023), an issue which disproportionately impacts the most disadvantaged schools (Amitai & Van Houtte, 2022). These statistics run alongside record student absences and year-on-year increases in children being home-schooled in the wake of the pandemic (Adams, 2024). Student behaviour in schools has also been cited as an exacerbating issue: NASUWT’s 2023 report found that 90% of teachers had experienced verbal abuse or violence, and 25% of pupils reported feeling unsafe at school in 2023 (Weale, 2024). Taken together, these statistics demonstrate that strengthening the relationships between teachers, students, and schools has never been more important.
IVR not only has the potential to open up the space of the classroom by offering access to different places and perspectives; its presence in classrooms could also spark the beginning of a transformative approach to teaching and learning. IVR demonstrates potential not only in enabling experiential learning but also through access to and engagement with the technology itself, which could make positive contributions to students’ and teachers’ digital literacy skills. This also feeds into the development of cultural capital (Bourdieu, 1986). Building on the development of digital literacy through IVR could enable an environment where teachers and students collaborate and share ideas for integrating IVR into the classroom and curriculum, enabling meaningful integration of the technology and building positive relationships through and with technology. This feeds into the development of social capital (Bourdieu, 1986).
Short abstract:
This presentation considers the possibilities and challenges of using digital tools to combat political disillusionment. Specifically, it appraises parliamentary e-petitions systems as a tool to bring citizens ‘in’ to political institutions, focusing on equity of access and platform sustainability.
Long abstract:
In a political climate replete with disillusionment, political institutions around the world are seeking to address declines in participation by reinvigorating the public’s involvement in politics. The UK Parliament is one example. It has employed new digital technologies to improve public understanding of what Parliament is and to equip citizens with the tools to engage with their representatives, to have their voices heard, and to influence the policy-making process.
Based on my PhD findings, I explore the possibilities and challenges of using digital technologies to open Parliament up and bring a centuries-old institution into the 21st century. I focus on the opportunities that the (joint) House of Commons and Government e-petitions system has created for bringing citizens ‘in’: namely, the public’s ability to engage directly with elected representatives and influence the parliamentary agenda.
I consider what it takes to ensure equitable access to Parliament through e-petitions by comparing the experiences of individual e-petition creators with those of campaign groups that create e-petitions. Asking whether access, use, and outcomes are the same speaks directly to the question of ‘good for whom?’ By reflecting on these challenges, I consider whether an e-petition system is a sustainable platform on which to build a more engaged and, one can hope, excited citizenry, or whether it is another avenue towards inequity and disillusionment.
With Parliament’s use of digital tools for public engagement already underway, now is an opportune moment to reflect on how these tools can bring us closer to a politically engaged citizenry.