- Convenors:
- Wenzel Mehnert (Technische Universität Berlin)
- Nele Fischer (Technische Universität Berlin)
- Sabine Ammon (Technische Universität Berlin)
- Format:
- Combined Format Open Panel
- Location:
- Theater 5, NU building
- Sessions:
- Tuesday 16 July
Time zone: Europe/Amsterdam
Short Abstract:
The doing and making of ethics within technology development raises central questions about the role of STS and the normativities involved. This panel invites scholars and practitioners to exchange experiences, reflections, and questions, contributing to the ongoing debate and to the field's self-understanding.
Long Abstract:
The development of technologies is entangled with promises of a better world and concerns about undesired effects, such as discrimination. Development is prefigured by the implicit ethics of developers and by the conceptual framing of the development process, and the values thus inscribed shape a technology's potential implications. Accordingly, integrating ethical reflection into the early phases of technological development offers an entry point for envisioning not only a technology but also its ethical foundations and implications. The resulting insights can be fed back into the development process to foster responsible technology designs.
Integrating ethics is a highly debated topic in doing STS. It raises central practical questions about the role of the ethicist, the normativities engaged, and the active positions taken towards making and doing transformation of and within technological development.
This combined format open panel invites scholars and practitioners to exchange their experiences, reflections, and strategies around the following questions:
a) Who should integrate ethical reflection in technology development? What is the role of the STS scholar or ethicist?
b) How is an integration of ethics (not) possible? What are experiences with approaches, tools, …?
c) How are you tackling implicit value systems (including your own)? What are the points of reference for ethical evaluation?
During the session, all contributors act as panelists in a discussion facilitated by the panel organizers. We start with each participant introducing their work and background in 3-5 minutes. The main part of the session revolves around (tentative) answers and thoughts on the questions above. Emerging insights are captured live and discussed at the end. The audience is warmly invited to join in with questions and statements.
If you would like to participate, your abstract should contain:
- a short reflection on the questions above
- a brief overview of your project/work
- a short biography
Accepted contributions:
Session 1: Tuesday 16 July 2024
Short abstract:
Drawing on over four years of experience developing the Embedded Ethics and Social Sciences methodology in the field of health AI, I reflect on how we have integrated the analysis of ethical and social issues in a dynamic, practice-oriented way.
Long abstract:
Ethical concerns around artificial intelligence technology have prompted a rush towards ‘AI ethics’ to consider how AI technology can be developed and implemented in an ethical manner. Since 2019, I have been working with my colleagues in the Munich Embedded Ethics Lab (MEEL) at the Technical University of Munich to develop the Embedded Ethics and Social Science Approach (EESS) to address established and emerging concerns with AI. EESS denotes the practice of integrating the consideration of social, ethical, and legal issues into the entire AI development process in a deeply embedded, collaborative, and interdisciplinary way. Our approach combines methodological and conceptual frameworks from bioethics and the social sciences (in particular STS) to help interdisciplinary development teams anticipate harmful effects and suggest new ways of thinking about ethical and social challenges during development processes.
We have accumulated over four years of experience through seven Embedded Ethics and Social Science projects within various interdisciplinary consortia in the field of health AI. By applying a wide range of methods (ranging from stakeholder interviews to bias analysis to participant observation), we have learned how to incorporate the analysis of ethical and social issues into AI projects in a dynamic, practice-oriented way.
Short abstract:
Technologies materialise arguments for certain forms of life over others. STS scholars are well placed to clarify what is at stake in their design and deployment. Rhetorical design analysis in STS, unbound from purely technical expertise, can help produce more democratic modes of critical transparency.
Long abstract:
I work with the assumption that all designed objects communicate a rhetorical argument for certain modes of doing, behaving, thinking and feeling in the world. This is simply by virtue of the fact that something has been designed this way, rather than that. As such, those involved in the design of technologies must be cognizant of the types of arguments they are materialising. As these arguments are unavoidable, clarification is key. STS scholars can help articulate the parameters of these arguments, explicate blind spots, and envision alternatives - during the design process, or after public release. This is a form of critical ethical transparency beyond purely technical transparency.
I have recently been working with Foucault’s ethical writings to (re)consider the subject positions we assume when we enter into rhetorical dialogue with technologies. How is this technology materialising its ideal user? What is it asking of us? How does this limit conduct? And at the expense of what? These critical ethical questions exist on the ontological terrain of the individual, rather than in technical expertise or in philosophical a prioris. Here, ethical evaluation becomes democratically constituted, offering a chance to refuse the potentially homogenising and hegemonic impacts of ‘ethical’ technological interventions imagined by corporate, capitalist, state, and other powerful actors today.
I am a Lecturer in Data, AI and Society in the Information School, University of Sheffield. I research the ethical design and deployment of digital technologies, specialising in sociotechnical theory, power, digital well-being, and the philosophy of interdisciplinary collaborations.
Short abstract:
Prospective Technology Assessment explicitly integrates ethical considerations into the early phases of science and technology development. To date, its fields of application have been nuclear and other energy technology research, the nano-technosciences, synthetic biology, and AI. Experiences will be discussed.
Long abstract:
The concept of Prospective Technology Assessment (ProTA) integrates ethical considerations into early phases of science and technology development, and it aims to reach research practitioners, technology developers, politicians, science managers, and the public. Ethics as such starts with the awareness and recognition of the ambivalence of science and of science-based technology. In order to find ways out of this ambivalence, concrete ethical considerations are necessary. ProTA includes elements of the most common concepts of ethics and integrates them in a particular way, as will be shown. It draws on Hans Jonas’ deontological principle of responsibility, which is related to a “heuristic of fear” and a “preservation principle”. It aims at achieving a “conservative” preservation of our life world and of “genuine human life”. The second background is Ernst Bloch’s utopian principle of hope, which addresses the “open horizon” of the future. This “unfolding principle” is aimed at an “alliance technology” that serves humankind and is concurrently in harmony with nature. Bloch’s approach can be related to some aspects of discourse ethics.
During the last decade, a number of ProTA-based projects that integrate ethical reflection into early phases of science and technology development have been conducted by adopting the ProTA guidelines and orientation framework, e.g. in (1) nuclear technology research, (2) energy research, (3) the nano-technosciences, (4) synthetic biology, and (5) AI. We would be happy to exchange our experiences concerning the conceptual reflections as well as the application of the ProTA concept in concrete projects.
Short abstract:
Voice-based chatbots have the potential to make great strides in performing clinical tasks, but without proper oversight, ethical, legal, and social implications may arise in their deployment. We ask experts to identify the ethical demands for the design and use of such a technology.
Long abstract:
Using voice as a biomarker for clinical support has been anticipated to provide an easy, cost-effective, and non-invasive means of collecting health data, and when paired with artificial intelligence (AI), it may significantly increase accuracy in diagnosing, predicting, and monitoring interventions. When vocal biomarkers are coupled with the 24/7 availability, individualized support, and remote application provided by AI-powered chatbots, voice-based chatbots have the potential to improve therapeutic care. However, the design and use of voice-based chatbots for mental health can bring about a myriad of ethical, legal, and social implications (ELSIs), such as inadequate support and guidance due to bias in the design, a lack of data protection and privacy regulations, and overreliance on the technology itself. The need for ethical guidelines for oversight is imperative, but how can one foster a responsible governance framework for such voice-based chatbots?
We propose an anticipatory ethics approach using a Delphi process to identify the ELSIs in the advent of such a technology. A panel of experts from varying disciplines, along with community representatives, will be tasked to ‘voice’ their opinions on the values they believe should be central to such a technology’s design and use. The results will help explore what constitutes trustworthy voice-based chatbots and to what extent therapeutic tasks can be replaced by AI. By involving key stakeholders in the early stages of shaping ethical guidelines for voice-based mental health chatbots, we are optimistic that this will help avoid ELSIs downstream, when these technologies are deployed in clinical settings.
Short abstract:
This contribution shares three in-progress prototypes that aim to integrate ethical reflection into the tech development processes of early-stage technology startups, developed via co-participatory action research and centering dignity.
Long abstract:
This contribution reflects upon co-participatory action research conducted over 2021-2023 with three early-stage technology startups and one tech ecosystem-enabling organization that desire to "be responsible". Grounded in complex adaptive systems theory, I suggest that ethical reflection in technology development needs to occur in the earliest stages of an organization's formation, because systems such as organizations are sensitive to their initial conditions. Further, drawing upon second-order cybernetics, technologists are not passively observing the systems they create but are part of their formation. As such, this work assumes that the co-founders and technology builders themselves need to conduct ethical reflection, tackling their implicit value systems as they do so. My role, as an STS scholar, is one of co-reflector, co-participant, and intervenor in over 25 participatory workshops across these organizations. This work forms part of my PhD and builds upon a decade of industry experience advising on organizational strategy and culture. Together, we prototyped three different ways to integrate ethics into tech development: an algorithmic review and design framework that centers dignity, a reflection tool that surfaces implicit values, and a process for developing responsible AI pledges. Further, to engage with ethical evaluation, the organizations experimented with using generative AI to help hold themselves to account to their values during coding and product development. This work provides a rich context for further discussion and exchange of ideas and experiences around what it could mean to integrate ethics into early technology development.
Short abstract:
To integrate ethical values into technology development, a structured collaboration between users and creators is necessary. Our tool, the Sensorkit, enables this with respect to privacy in the home: users gather sensor data in their homes, combined with individual and collaborative ethical reflection.
Long abstract:
Q1: Ethical reflection on technology is a shared responsibility of creators and users. STS scientists and ethicists play a key role in identifying and defining ethical values, facilitating communication and raising awareness. However, Berger et al. (2023) point out that even technologies with good intentions can have unintended negative consequences. Therefore, a structured collaboration is required to create value-driven participatory technology design frameworks.
Q2: Our project "Simplications" expands on Kurze et al.'s (2020) use of a Sensorkit to engage participants in data interpretation and employs participatory design to explore smart home privacy. This approach has been instrumental in identifying potential abuses (Berger et al., 2023). We develop an additional workbook to encourage ethical reflection. A prototype tool is available for discussion.
Q3: We voluntarily applied for university ethics approval. Furthermore, our interdisciplinary team steers reflection through different lenses. At international workshops, the Sensorkit is combined with ethical reflection canvases. We also reach out to a wide range of citizens to gather insights: your call for participation inspired us to facilitate an internal workshop about our own ethical values.
Members of the project: Alexa Becker is a researcher at Anhalt University of Applied Sciences, Karola Köpferl is a PhD candidate at TU Chemnitz, Andy Börner is a researcher at TU Chemnitz, Albrecht Kurze is a post-doctoral researcher at TU Chemnitz, Arne Berger is a professor at Anhalt University of Applied Sciences, and Andreas Bischof is a junior professor at TU Chemnitz.
Short abstract:
We collaborate on the problem of presuppositions in user modelling from an ethical and a developer's perspective. In our work, we try to establish a VSD-oriented approach to integrating ethical reflection on implicit assumptions and values into the technical implementation of user models.
Long abstract:
Our collaboration is set in the context of the SFB/TRR 318 "Constructing Explainability" that deals with the challenges of explainable AI.
In this context, user modelling emerges as a central issue for tailored communication between humans and machines. However, it raises ethical concerns, as it inherently relies on strong presuppositions about human communication and personality. For example, models from psychology are frequently used and assessed via machine learning techniques, a practice accelerated by technological advancement, yet often without sufficient contextual reflection. Consequently, these presuppositions, embedded in the development process, can manifest not only as implicit but also as unexamined and tacit assumptions. Tackling implicit value systems is therefore a central point of ethical reflection in the development and implementation of user models. To make technical design ethical, the role of the ethicist is to provide ethical expertise and skills (e.g. making implicit assumptions explicit, initiating a change of perspective), whereas the role of the developers is to integrate those insights into practice, by acknowledging the ethically challenging role of tacit assumptions and by using them responsibly, i.e. in consideration of possible risks. The approach we have chosen for addressing these challenges is value-sensitive design (VSD).
Martina Philippi wrote her dissertation in philosophy on the phenomenology of tacit assumptions. Before joining the TRR 318, she conducted accompanying ethics research in a project on rescue robotics using VSD.
Dimitry Mindlin is writing his dissertation in computer science within the TRR 318, on explaining black-box machine learning models in co-constructive dialogues.
Short abstract:
In our contribution, we will share our reflections on the crucial role of tacit knowledge in bridging the gap between academia and software developers, based on a research project that aims to address the gap between research and practice in AI Ethics.
Long abstract:
Our research project aimed to bridge the gap between AI Ethics research and its practical application. To enhance the accessibility of ethical considerations in AI, we opted for a web-based platform commonly used by software companies for digital documentation. The development process of this tool presented us with challenges and opportunities to reflect on the pivotal role of tacit knowledge in connecting academia and software developers.
We developed the AI Ethics Tool, a pragmatic framework designed to incorporate ethical considerations into the development and deployment of AI systems. This tool addresses challenges such as bias, unfairness, and lack of transparency in AI systems. It underscores the importance of involving diverse stakeholders and addresses gaps in AI ethics research.
Our research is a step towards responsible AI practices. Throughout the process of developing the tool, we identified moments in the software development timeline where there were opportunities for intervention, and we provided a relatable yet research-based framework that enables practitioners to engage with the normative challenges of AI development through an accessible and intuitive platform.