- Convenors:
- Brit Winthereik (IT University of Copenhagen)
- Benjamin Lipp (Technical University of Denmark)
- Cathrine Hasse (Aarhus University)
- Anders Kristian Munk (Technical University of Denmark)
- Søren Riis
- Discussant:
- Tanja Schneider (Technical University of Denmark)
- Format:
- Traditional Open Panel
Short Abstract:
How is the human enacted in human-centred digital innovation and to what effects? We invite submissions that engage empirically, critically and/or experimentally with the practices that mobilise the human in quests for responsible digitalization.
Long Abstract:
In the face of the digital re-distribution of agency and decision-making through artificial intelligence and related digital technologies, we witness the re-emergence of human-centred innovation as a vehicle to re-align digital technology with human needs. We see both research and policy initiatives promoting human-centred AI, human-centred design of digital systems, human-centred leadership, or human-centred business practices.
However, it is rarely (if ever) made explicit which version of the human is being centred and, equally consequential, which versions of the human are being decentred in those processes. Rather, the human is taken for granted, as if it were self-evident which capacities we associate with it, how we delimit it, or which values we take it to hold. To further complicate this situation, claims of diversity risk being thwarted by the exclusionary assumptions implicit in innovation practices seeking to become human-centred.
This panel invites contributions that inquire into the enactment of the human in human-centred digital innovation figurations and processes. We welcome empirical contributions, critical theoretical reflection, and experimental engagements with how specific versions of the human come to be taken for granted while others become sidelined or marginalized. This includes attention to the concrete consequences for how digital technology is developed and how policy interventions are framed.
Contributions might, for example, though need not exclusively, discuss the following questions:
● How is human-centricity practiced at digital innovation hot spots and/or in policy making?
● What concepts of the human are used to make diversity manageable at such sites?
● How can existing methodologies, concepts, and models be improved, and can STS contribute to doing so?
● Is it possible to leverage human-centredness for engaging issues of equity and diversity within infrastructure studies and post-humanism? If not, what other pathways must we create?
Accepted papers:
Session 1
Christopher O'Neill (Deakin University)
Long abstract:
Human-in-the-loop (HITL) models of human-automation interaction position a human as taking a meaningful ‘decision-making’ role in any automated system. However, there are fears that the HITL has simply become a way to "rubber-stamp" or authorise the outputs of automated systems without allowing for meaningful human input, or else a convenient figure to take the blame for the outputs of flawed systems over which humans have no meaningful control (see Elish, 2020; Ganesh, 2020; Perrow, 1984).
As a way of reconceptualising the figure of “the human” in automated systems, I argue for a critical reconsideration of a theoretical tradition which is often overlooked in the Anglophone academy – that of Francophone Work Studies. Researchers like Véronique de Keyser (1976) and Yves Clot (1997) pursued a path of research which, drawing from structuralist analysis of the "catachrestic" misuse of language, emphasised the creative capacity of workers to redefine work processes in response to unexpected changes or challenges. Here the HITL’s capacity for misuse and for error is not simply a danger to be eradicated but, in the right circumstances, a source of creative invention. Moreover, this approach demands that research go beyond an abstract and schematic model of the HITL. The question of responsibility and agency must be investigated by placing the HITL within the workplace understood as a social space, one in which the "meaning" of work and intervention is produced not by an idealised "human", but in the complex relation between workers, technologies, and organisational infrastructures.
Johan Irving Søltoft (Technical University of Denmark)
Long abstract:
Traditionally, the film industry relied on feedback from audiences during test screenings of completed films. Now there is a shift toward involving the audience early in the production process, aiming to incorporate more diverse opinions. Digital and computational methods are increasingly recognized as crucial in enabling this change.
This paper draws on an ethnographic study conducted at a Danish film consultancy that uses digital ethnography tools to analyse audience awareness for early-stage films. Through participant observation, the study delves into the consultancy's methods for determining which audiences are relevant, and how to curate their responses into actionable insights. This process inherently involves privileging certain 'versions' of the audience while marginalising others, thereby raising critical questions about whose voices are amplified and whose are suppressed in the quest for human-centered film production.
The study examines the methods developed by the consultancy firm, including mobile ethnography for engaging global audiences, automatic transcription for processing interviews, emotion detection algorithms applied to transcribed data, and generative AI, in order to showcase different practices of constructing the audience for the movie production process.
This contribution aims to critically examine how specific versions of the audience are enacted in the pursuit of human-centered innovation in the European film industry.
Thea Sofie Skjødt Engstrøm (Aarhus University)
Long abstract:
This paper explores human-centered innovation in a recent interdisciplinary Danish EdTech project. The project brought together EdTech entrepreneurs, education researchers, computer scientists, and primary school teachers to develop an automated tool for data-driven early writing assessment. The developmental involvement of teachers was understood as important for addressing trust and integration issues with AI and machine learning in education (Simonsen, 2020). By incorporating teachers’ visions of future use, the team inscribed both anticipated use in different situations and distinctive understandings of teachers-as-users in the tool’s design (Akrich 1992). Significantly, this was tempered by the recognition that the system needed a high degree of adaptability to local contexts that cannot be predicted in advance. Interactions between user limitations and contextual malleability would ideally create a sense of unity and collaboration between teachers and tools in decision-making processes (Savolainen & Ruckenstein, 2022). However, aligning the system’s visual output with formative educational goals presented challenges, as it might seem to mimic existing summative tools, risking misunderstandings of what the tool offered and threatening successful collaborations. The team feared that teachers would exclude themselves from the center of innovation through a contextually salient overestimation of the system’s capacities. I thus argue that the project enacts ambiguous imaginaries of inscribed limitations and contextual adaptation in more-than-human collaboration between teachers and automated tools. As such, I propose a co-evolutionary approach to innovation that preserves human expertise despite anticipated misuse while promoting shared, albeit contested, decision-making and collaboration between human and nonhuman actors in algorithmic systems (Seaver, 2017).
Henriette Langstrup (University of Copenhagen)
Long abstract:
Patient-reported outcomes (PRO) data are data on an individual patient’s health-related functioning and quality of life, reported through questionnaires as part of a treatment trajectory. Increasingly administered through digital platforms, these tools aim to record and utilize patients’ subjective experiences to improve shared decision-making, to prioritize activities and to inform management decisions in healthcare. In healthcare policy, PRO-data has come to feature as a feasible and rational way of “giving voice” to “what matters to patients” while increasing efficiency and effectiveness.
More recently, in our current age of AI hope and hype, PRO-data has moreover become attached in interesting ways to discussions of patient-centered AI. In this paper, I will take my point of departure from propositions made in the digital health literature suggesting: 1) that the inclusion of PRO-data in AI model training “is a critical part of the humanization of AI for health”, 2) that AI should be used to transform PRO-data into meaningful narratives to increase clinicians’ engagement, and 3) that empathic AI chatbots should be used to collect PRO-data more efficiently from patients.
What understandings of “voice”, “humanization” and “patient participation” are at play in these visions for patient data and AI in healthcare? How have the synthetic voices of patients in the form of PRO-data become so valuable for healthcare systems and digital health innovation, while other versions of patient voices seem neglected? What may be the consequences for individual patients and health systems at large of mobilizing these synthetic voices?
Olfa Chelbi, Carole-Anne Tisserand (Mines Paristech)
Long abstract:
The involvement of users in human-centered innovation processes is a widespread practice acknowledged as a means to gain deeper insights into market needs. Personas, fictional user representations, are utilized in these approaches to substitute real users, offering a cost-effective method that fosters discussions and empathy. Despite recognized benefits, personas have faced criticism for being constructed on market assumptions rather than robust quantitative data, resulting in elastic user portraits aligned with service providers' projections.
This communication delves into the standardization of personas within innovation workshops. Immersed in two different domains (a regional council implementing digital innovation policies and a banking firm developing new digital services), we uncovered surprisingly similar personas portraying comparable digital user figures. These personas, with slight variations, share a common passion for digital services and social networks, and an occasional sensitivity to social and environmental causes. They also usually occupy senior positions. This prompts an initial inquiry: how have these personas proliferated across these two different domains, what processes have shaped their appearance in comparable forms, and how do they transform during the innovation process?
Based on an STS approach, our study examines persona creation and transformation across workshop phases, a process that stems from interactions between facilitators and participants. We show that this process creates a certain image of who the user of these services is. Finally, we question the limits of this approach, which leads to a standardization and large-scale diffusion of user figures across diverse sectors.
Jurate Kavaliauskaite (Vilnius University)
Long abstract:
Silicon Valley and its tech industries have drawn attention as the major hotbed of present-day technological utopianism or futurism, entrenched techno-solutionism and transhumanist visions (e.g. Tutton, 2021; Morozov, 2010; Huesemann, 2011; Bunn, 2022). Nevertheless, their global salience and positioning vis-à-vis worldwide consumer markets at the same time elicit less dazzling, mundane, down-to-earth public/stakeholder-oriented considerations and corporate sociotechnical imaginaries (Mager & Katzenbach, 2020; Hockenhull, 2021) that address, generate meaning(s) for, and advocate ongoing disruptive innovation in relation to society and the current human condition, but are rarely systematically studied. Based on a case study of Google’s corporate initiatives and discourses over nearly a decade, my paper examines the intricate ideas of ‘human’ and ‘human-centeredness’ constructed in the wake of, and vis-à-vis, the exponential growth of intelligent automation and generative AI across the big tech company’s global ecosystem of digital services, products and infrastructures. Theoretically drawing on N. Katherine Hayles’ concept of ‘cognitive assemblages’ (Hayles, 2016, 2017), the paper demonstrates how Google’s enactment of humanness is peculiar, complicated and problematic. On the one hand, humans as tech consumers are enacted as ‘cognizers’, whose identity is inextricably intertwined with and projected upon the corporate designs of the ‘cognitive non-conscious’ (of AI); on the other hand, this entanglement, as well as the differences between human and non-human cognition, is suppressed and concealed, bringing forward an instrumental, utilitarian understanding of intelligent technologies that seeks to retain the modernist hierarchies of knowing and knowers and the autonomy of the improvement-seeking human.
Radhika Gorur (Deakin University)
Long abstract:
Education is an inherently future-oriented and humanistic enterprise – children are seen as ‘citizens of tomorrow’ and the job of education institutions is seen as preparing young people for their – and the planet’s – futures. Rapid advances in technology have resulted in a series of high-profile reports which project ideas about what the future will hold for education and for the young people in educational institutions today. These imagined futures are determining how education systems adapt in the present to accommodate, negotiate and ‘thrive’ in these futures.
This paper examines the imagined techno-futures in key reports from UNESCO, the OECD and the World Bank to understand how these imaginaries are reconfiguring both the machine and the human. The OECD sees AI and humans as engaged in a race of knowledge and skills. While AI can keep learning and improving, humans can’t or don’t always do so, and they are thus under threat of being overtaken by AI. Similarly, the World Bank places technologies and humans in competition with each other; technologies are to be tamed and colonised to serve humanity. Finally, UNESCO argues that we need to “relearn our interdependencies” and understand “our human place and agency in a more-than-human world.”
Together, these reports provide clues to how the human is being recast, reimagined, and enacted in imagined techno-futures. This paper examines and anticipates the impact of these enactments on different aspects of education.