- Convenors:
  - Wenzel Mehnert (Technische Universität Berlin)
  - Nele Fischer (Technische Universität Berlin)
  - Sabine Ammon (TU Berlin)
- Format:
- Combined Format Open Panel
Short Abstract:
The doing and making of ethics within technology development raises central questions on the role of STS and the normativities involved. This panel invites scholars and practitioners to exchange experiences, reflections and questions. It contributes to the ongoing debate and to the self-understanding of the field.
Long Abstract:
The development of technologies is entangled with promises of a better world and concerns about undesired effects (like discrimination). Their development is prefigured by the implicit ethics of developers and the conceptual framing of the development process. The respective values inscribed influence the potential implications. Accordingly, integrating ethical reflections already within the early phases of technological development offers an entry point to envision not only a technology, but also its ethical foundations and implications. The resulting insights can be reintegrated into the development process to foster responsible technology designs.
Integrating ethics is a highly debated topic in doing STS. It raises central practical questions about the role of the ethicist, the normativities engaged, and the active positions taken towards making and doing transformation of and within technological development.
This combined format open panel invites scholars and practitioners to exchange their experiences, reflections and strategies with the following questions:
a) Who should integrate ethical reflection in technology development? What is the role of the STS scholar or ethicist?
b) How is an integration of ethics (not) possible? What are experiences with approaches, tools, …?
c) How are you tackling implicit value systems (including your own)? What are the points of reference for ethical evaluation?
During the session, all contributors act as panelists in a discussion facilitated by the panel organizers. We start with each participant introducing their work and background in 3–5 minutes. The main part of the session revolves around (tentative) answers to and thoughts on the questions. Emerging insights are captured live and discussed at the end. The audience is warmly invited to join with questions and statements.
If you would like to participate, your abstract should contain:
- a short reflection on the questions above
- a brief overview on your project / work
- a short biography
Accepted contributions:
Session 1
Daniela Boraschi (Kavli Centre for Ethics, Science, and the Public - University of Cambridge)
Short abstract:
This paper offers a new empirically grounded concept of ‘affordances for ethics in science’. The focus is on how material and experiential aspects of lab-based collaboration enable or inhibit scientists in addressing ethical questions.
Long abstract:
This paper explores ethics in science through an ongoing case study of an AI lab, offering a new empirically grounded concept of ‘affordances for ethics in science’. As AI evolves from a technical challenge to a domain requiring alignment with moral principles and societal values, collaboration between scientists, ethicists, and social scientists becomes increasingly important. While STS scholarship has predominantly approached these collaborations from an epistemological perspective, the structural conditions that enable or hinder collaborations have not been fully examined. I argue that Gibson’s (1979) and Norman’s (1988) concept of ‘affordances’ is useful to understand not only socio-technical infrastructures but also emerging collaborative practices on the ethics of AI research. To support my argument, first, I review the ways in which the concept of ‘affordances’ has been used in psychology, design, education, and anthropology. Second, I use the concept of ‘affordances for ethics in science’ to analyse ethnographic data from a collaborative writing workshop, a peer-review process, and a series of online meetings. The focus is on examining how material and experiential aspects of these practices (e.g. technologies, spaces, and events) enable or inhibit scientists in identifying and addressing ethical questions and, in turn, shape the nature of ethics themselves. The aim is to offer a new perspective for STS researchers venturing into the field of AI research.
Amelia Fiske (Technical University of Munich)
Short abstract:
I have sent a document with my full answers to Mr. Wenzel Mehnert; I guess I have too much to say to answer all three questions in 250 words!
Long abstract:
Ethical concerns around artificial intelligence technology have prompted a rush towards ‘AI ethics’ to consider how AI technology can be developed and implemented in an ethical manner. Since 2019, I have been working with my colleagues in the Munich Embedded Ethics Lab (MEEL) at the Technical University of Munich to develop the Embedded Ethics and Social Science Approach (EESS) to address established and emerging concerns with AI. EESS denotes the practice of integrating the consideration of social, ethical and legal issues into the entire AI development process in a deeply integrated, collaborative and interdisciplinary way. Our approach combines methodological and conceptual frameworks from bioethics and the social sciences (in particular STS) to help interdisciplinary development teams anticipate harmful effects and suggest new ways of thinking about ethical and social challenges during development processes.
We have accumulated over four years of experience through seven Embedded Ethics and Social Science projects within various interdisciplinary consortia in the field of health AI. By applying a wide range of methods (ranging from stakeholder interviews to bias analysis to participant observation), we have learned how to incorporate the analysis of ethical and social issues into AI projects in a dynamic, practice-oriented way.
Niall Docherty (University of Sheffield)
Short abstract:
Technologies materialise arguments for certain forms of life over others. STS scholars are suited to clarify what is at stake in their design and deployment. Rhetorical design analysis in STS, unbound from purely technical expertise, can help produce more democratic modes of critical transparency.
Long abstract:
I work with the assumption that all designed objects communicate a rhetorical argument for certain modes of doing, behaving, thinking and feeling in the world. This is simply by virtue of the fact that something has been designed this way, rather than that. As such, those involved in the design of technologies must be cognizant of the types of arguments they are materialising. As these arguments are unavoidable, clarification is key. STS scholars can help articulate the parameters of these arguments, explicate blind spots, and envision alternatives - during the design process, or after public release. This is a form of critical ethical transparency beyond purely technical transparency.
I have recently been working with Foucault’s ethical writings to (re)consider the subject positions we assume when we enter into rhetorical dialogue with technologies. How is this technology materialising its ideal user? What is it asking of us? How does this limit conduct? And at the expense of what? These critical ethical questions exist on the ontological terrain of the individual, rather than in technical expertise or in philosophical a prioris. Here, ethical evaluation becomes democratically constituted, offering a chance to refuse the potentially homogenising and hegemonic impacts of ‘ethical’ technological interventions imagined by corporate, capitalist, state, and other powerful actors today.
I am a Lecturer in Data, AI and Society in the Information School, University of Sheffield. I research the ethical design and deployment of digital technologies, specialising in sociotechnical theory, power, digital well-being, and the philosophy of interdisciplinary collaborations.
Wolfgang Liebert (University of Natural Resources and Life Sciences BOKU Vienna) Jan Cornelius Schmidt (Darmstadt University of Applied Sciences)
Short abstract:
Prospective Technology Assessment explicitly integrates ethical considerations into early phases of science and technology development. The fields of application so far have been nuclear and other energy technology research, nano-technosciences, synthetic biology, and AI. Experiences will be discussed.
Long abstract:
The concept of Prospective Technology Assessment (ProTA) integrates ethical considerations into early phases of science and technology development – and it aims to reach research practitioners, technology developers, politicians, and science managers as well as the public. Ethics as such starts with the awareness and recognition of the ambivalence of science and of science-based technology. In order to find ways out of this ambivalence, concrete ethical considerations are necessary. ProTA includes elements of the most common concepts of ethics and integrates these in a certain way, as will be shown. It draws on Hans Jonas’ deontological principle of responsibility, which is related to a “heuristic of fear” and a “preservation principle”. It aims at achieving a “conservative” preservation of our life world and “genuine human life”. The second background is related to Ernst Bloch’s utopian principle of hope, which addresses the “open horizon” of the future. Such an “unfolding principle” is aimed at an “alliance technology” which serves mankind and is concurrently in harmony with nature. Bloch’s approach can be related to some aspects of discourse ethics.
During the last decade a number of ProTA-based projects, that integrate ethical reflection into early phases of science and technology development, have been conducted by adopting the ProTA guidelines and the orientation framework, e.g.: (1) nuclear technology research, (2) energy research, (3) nano-technosciences, (4) synthetic biology, and (5) AI. – We would be happy to exchange our experiences concerning the conceptual reflections as well as the application of the ProTA concept in concrete projects.
Scott Lear Geneviève Rouleau (Université du Québec en Outaouais) Nathanael Siaoman (Simon Fraser University) Zoha Khawaja (Simon Fraser University) Jean-Christophe Belisle-Pipon (Simon Fraser University) Hortense Gallois (Simon Fraser University)
Short abstract:
Voice-based chatbots have the potential to make great strides in performing clinical tasks, but without proper oversight, ethical, legal, and social implications may prevail in their deployment. We ask experts to identify the ethical demands required for the design and usage of such a technology.
Long abstract:
Using voice as a biomarker for clinical support has been anticipated to provide an easy, cost-effective, and non-invasive means of collecting health data, and when paired with artificial intelligence (AI), may significantly increase accuracy in diagnosing, predicting, and monitoring interventions. When vocal biomarkers are coupled with the 24/7 availability, individualized support, and remote application provided by AI-powered chatbots, voice-based chatbots have the potential to improve therapeutic care. However, the design and usage of voice-based chatbots for mental health can bring about a myriad of ethical, legal, and social implications (ELSIs), such as providing inadequate support and guidance due to bias in the design, lack of data protection and privacy regulations, and overreliance on the technology itself. The need for ethical guidelines for oversight is imperative, but how can one foster a responsible governance framework for such voice-based chatbots?
We propose an anticipatory ethics approach using a Delphi process to identify the ELSIs in the advent of such a technology. A panel of experts from varying disciplines along with community representatives will be tasked to ‘voice’ their opinions on the values they believe should be central to such a technology’s design and usage. The results will help explore what constitutes trustworthy voice-based chatbots and to what extent therapeutic tasks can be replaced by AI. By involving key stakeholders in the early stages of shaping ethical guidelines for voice-based mental health chatbots, we are optimistic this will help avoid ELSIs downstream, when these technologies are deployed in clinical settings.
Lorenn Ruster (Australian National University)
Long abstract:
This contribution reflects upon co-participatory action research conducted over 2021-2023 with three early-stage technology startups and one tech ecosystem-enabling organization, all of which desire to "be responsible". Grounded in complex adaptive systems, I suggest that ethical reflection in technology development needs to occur in the earliest stages of an organization's formation because systems such as an organization are sensitive to initial starting conditions. Further, drawing upon second-order cybernetics, technologists are not passively observing the systems they create but are a part of their formation. As such, this work assumes that the co-founders and technology builders themselves need to conduct ethical reflection, tackling their implicit value systems as they do so. My role, as STS scholar, is one of co-reflector, co-participator and intervenor in over 25 participatory workshops across these organizations. This work forms part of my PhD and builds upon a decade of industry experience advising on organizational strategy and culture. Together, we prototyped three different ways to integrate ethics into tech development centering dignity: an algorithmic review and design framework that centers dignity, a reflection tool which surfaces implicit values, and a process for developing responsible AI pledges. Further, to engage with ethical evaluation, the organizations experimented with using Generative AI to assist in holding themselves to account to their values during coding and product development processes. This work provides a rich context to further discuss and exchange ideas and experiences around what it could mean to integrate ethics into early technology development.
Alexa Becker (Anhalt University of Applied Sciences) Andy Börner (Chemnitz University of Technology) Arne Berger (Anhalt University of Applied Sciences) Karola Köpferl (Chemnitz University of Technology) Andreas Bischof (Chemnitz University of Technology) Albrecht Kurze (Chemnitz University of Technology)
Short abstract:
Project: To integrate ethical values into technology development, a structured collaboration between users and creators is necessary. Our tool Sensorkit enables this in terms of privacy in the home: users gather sensor data in their homes, combined with individual and collaborative ethical reflection.
Long abstract:
Q1: Ethical reflection on technology is a shared responsibility of creators and users. STS scientists and ethicists play a key role in identifying and defining ethical values, facilitating communication and raising awareness. However, Berger et al. (2023) point out that even technologies with good intentions can have unintended negative consequences. Therefore, a structured collaboration is required to create value-driven participatory technology design frameworks.
Q2: Our project "Simplications" expands on Kurze et al.'s (2020) use of a Sensorkit to engage participants in data interpretation and employs participatory design to explore smart home privacy. This approach has been instrumental in identifying potential abuses (Berger et al., 2023). We develop an additional workbook to encourage ethical reflection. A prototype tool is available for discussion.
Q3: We voluntarily applied for university ethics approval. Furthermore, our interdisciplinary team steers reflections through different lenses. At international workshops, the Sensorkit is combined with ethical reflection canvases. We also reach out to a wide range of citizens to gather insights: your call for participation inspired us to facilitate an internal workshop about our own ethical values.
Members of the project: Alexa Becker is a researcher at Anhalt University of Applied Sciences, Karola Köpferl is a PhD candidate at TU Chemnitz, Andy Börner is a researcher at TU Chemnitz, Albrecht Kurze is a post-doctoral researcher at TU Chemnitz, Arne Berger is a professor at Anhalt University of Applied Sciences and Andreas Bischof is a Juniorprofessor at TU Chemnitz.
Martina Philippi (Paderborn University) Dimitry Mindlin (Bielefeld University)
Short abstract:
We collaborate on the problem of presuppositions in user modelling from an ethical and a developer's perspective. In our work, we try to establish a VSD-oriented approach to integrating ethical reflection on implicit assumptions and values into the technical implementation of user models.
Long abstract:
Our collaboration is set in the context of the SFB/TRR 318 "Constructing Explainability" that deals with the challenges of explainable AI.
In this context, user modelling emerges as a central issue for tailored communication between humans and machines. However, it raises certain ethical concerns as it inherently relies on strong presuppositions about human communication and personality. For example, models from psychology are frequently used and assessed via machine learning techniques, a practice accelerated by technological advancements, yet without sufficient contextual reflection. Consequently, these presuppositions, embedded in the development process, can manifest not only as implicit but also as unexamined and tacit assumptions. Tackling implicit value systems is therefore a central point in an ethical reflection on the development and implementation of user models. To make technical design ethical, the role of the ethicist is to provide ethical expertise and skills (e.g. making implicit assumptions explicit, initiating a change of perspective), whereas the part of the developers is to integrate those insights into practice, by acknowledging the ethically challenging role of tacit assumptions as well as by using them responsibly, i.e. in consideration of possible risks. The approach we choose for addressing these challenges is value-sensitive design (VSD).
Martina Philippi wrote her dissertation in philosophy on the phenomenology of tacit assumptions. Before joining the TRR318, she did ethical accompanying research in a project on rescue robotics with VSD.
Dimitry Mindlin is writing his dissertation in computer science on explaining black box machine learning models in co-constructive dialogues in the TRR318.
Sabrina Blank (University of Lübeck) Christian Herzog (University of Lübeck)
Long abstract:
A meaningful integration of ethics into technology development requires addressing open questions about methodological approaches, interdisciplinarity, and responsibility. We report on a strategic integration of ethics into a development project that successfully provided starting points for operational action. To achieve this, we established an interdisciplinary, participatory collaboration between a development team for medical artificial intelligence, an embedded ethicist, and an external technoethicist—a strategy that we denote the double-tiered approach to integrating ethics. We reflect on the constellation of roles within a series of workshops we conducted for ethical analysis of the socio-technical ecosystem, on how this contributed to the effectiveness of the participatory procedure and facilitated interdisciplinary collaboration, and on the challenges faced in the allocation of responsibility.
We argue that the double-tiered approach to integrating ethics was essential in facilitating the necessary exchange between the technical and ethical domains, enabling the identification of project-specific ethical implications. The embedded ethicist provided ethical expertise as well as organized and moderated the collaborative procedure. The external technoethicist provided objectivity on ethical adequacy and depth. This intertwining of embeddedness and external objectivity may mitigate the often-discussed frictions between objective critical distance and identification with the development goals. We are convinced that these results provide a starting point for discussing how to deal with implicit value systems of ethicists. Is a double-tiered approach to integrating ethics economically and ethically viable? Does it contribute to actual and perceived trustworthy development processes? What other measures can ensure that superficial ethical evaluations are avoided?
Sarah Hladikova (Tufts University) Andreia Martinho (Tufts University) Yuling Wang (Tufts University)
Long abstract:
Our research project aimed to bridge the gap between AI Ethics research and its practical application. To enhance the accessibility of ethical considerations in AI, we opted for a web-based platform commonly used by software companies for digital documentation. The development process of this tool presented us with challenges and opportunities to reflect on the pivotal role of tacit knowledge in connecting academia and software developers.
We developed the AI Ethics Tool, a pragmatic framework designed to incorporate ethical considerations into the development and deployment of AI systems. This tool addresses challenges such as bias, unfairness, and lack of transparency in AI systems. It underscores the importance of involving diverse stakeholders and addresses gaps in AI ethics research.
Our research is a step towards responsible AI practices. Throughout the process of developing the tool, we identified the moments in the software development timeline where there were opportunities for intervention and provided a relatable yet research-based framework, enabling practitioners to engage with the normative challenges in AI development through an accessible and intuitive platform.