- Convenors:
- Giulia De Togni (The University of Edinburgh)
- Agnessa Spanellis (The University of Edinburgh)
- Chairs:
- Giulia De Togni (The University of Edinburgh)
- Agnessa Spanellis (The University of Edinburgh)
- Discussant:
- Roger Andre Søraa (NTNU)
- Format:
- Combined Format Open Panel
- Location:
- NU-5A57
- Sessions:
- Wednesday 17 July, -, -
Time zone: Europe/Amsterdam
Short Abstract:
AI innovation is led by developers and regulators, often excluding the public. Through exploring creative strategies, including visual arts and gamification, we invite discussion on how to promote responsible innovation while emphasizing diversity of knowledge and perspectives.
Long Abstract:
Current debates about the future of AI in our daily lives are dominated by developers and regulators, often excluding users and the public, who usually receive expert opinions without actively participating in innovation processes. This is particularly problematic in healthcare, where AI and robotics applications raise significant concerns, including sensitive data management, violation of user privacy, and safety risks (Bostrom 2003; Fenech et al. 2018; Lin et al. 2011; Sharkey and Sharkey 2011). To move toward responsible innovation, we encourage a more inclusive and collaborative dialogue that engages the end users of these technologies.
We reframe user engagement as "boundary work" (Langley et al. 2019), which occurs at the intersections of diverse domains, professions, cultures, and communities. Instead of focusing on expertise (expert vs. non-expert), our framing emphasises diversity of knowledge, cultural backgrounds, and social perspectives. Effective boundary work involves transforming or creating shared knowledge (Carlile 2002) and collectively validating or negotiating it (Brown and Duguid 2001). At the core of this process is the concept of a "boundary object" (Star and Griesemer 1989), which serves as a common reference point to establish a shared context.
Our panel encourages participants to find such 'boundary objects' through creative strategies aimed at engaging the public in responsible AI innovation practices. We suggest that creative artifacts can function as boundary objects, facilitating shared understanding and knowledge exchange across diverse boundaries. We welcome both academic papers and interactive workshop-style contributions. Creative approaches may include visual arts, performances, and games. Both the papers and the workshop activities should elicit discussion on responsible innovation, user involvement, and public participation in decision-making.
By combining academic papers with creative methods during the panel, we aim to foster meaningful conversations toward democratizing AI innovation and promoting collaboration among diverse stakeholders.
Accepted contributions:
Session 1: Wednesday 17 July 2024, -
Short abstract:
Stakeholders struggle to take full responsibility for the Ethical, Legal and Social Aspects of AI in Education. We explore 'guidance ethics' as a socio-technological ethical method to address this dilemma. This postphenomenological approach places a strong emphasis on stakeholder participation.
Long abstract:
Stakeholders are increasingly held accountable for 'responsible' AI in Education (AIED) (Dignum, 2021). This foremost means incorporating Ethical, Legal and Social Aspects (ELSA) (Fisher et al., 2006), but also addressing structural challenges such as platformisation (Poell, Nieborg, and van Dijck, 2019) and the growing dominance of (big) tech (Sharon & Gellert, 2023; Kerssens & van Dijck, 2021). However, stakeholders often lack the capabilities to evaluate and anticipate ELSA. One of the challenges here is the control dilemma: incorporating ELSA is easy while these aspects are not yet manifest, yet by the time they become known, the technology is difficult to change (Collingridge, 1980).
We explore a socio-technological ethical method called ‘guidance ethics’ (Verbeek & Tijink, 2019) as an approach to address this dilemma. This postphenomenological approach places a strong emphasis on stakeholder participation.
Our research question is: 'How can stakeholder participation based on guidance ethics contribute to responsible AIED?' We collected data from five workshops in secondary schools in Flanders and two meetings of the Independent Advisory Board Smartschool.
Our results reveal discrepancies between the expected effects of AIED, as well as differing perspectives on how to mitigate them. The efficiency gains, for example (often the basis on which the technology is purchased), were contested by many stakeholders. This information deepens the understanding of dynamic human-technology entanglements and improves the capabilities to incorporate ELSA. The method, however, often leaves us with the question 'and now?' We link this question to another of our findings: the struggle of stakeholders to take responsibility for ELSA.
Short abstract:
We discuss the 'shape-shifter' methodology from the Project "Shaping AI", which enables problematisation through idea exchange and the mobilisation of situated expertise. We examine its strengths and limitations in addressing participation challenges in responsible AI adoption within public services.
Long abstract:
AI adoption in public services is increasing (Dencik et al., 2019). However, those leading this adoption often lack the knowledge and confidence to address responsible AI issues (Dencik et al., 2022), raising questions about ethics, bias, transparency and accountability (Eubanks, 2018). While citizen and community engagement are considered crucial, participatory methods based on deliberative methodologies still struggle to widen participation to marginalised communities (Hintz et al., 2022).
In this paper, we present a methodological approach developed in the context of the Project “Shaping AI”: the 'shape-shifter'. Integrating controversy analysis (Marres, 2007; 2015; 2021) and design research (Jansen et al., 2015), this creative strategy engages peer communities of experts, defined as all those committed to genuine debate on AI issues (Funtowicz and Ravetz, 1997). It uses props and materials to enable a collaborative analysis of AI and society controversies (Marres et al., forthcoming). We will explain how we used the method to facilitate shared understanding and knowledge exchange across a group of experts with diverse backgrounds (Gobbo et al., forthcoming). We will examine the strengths and limitations of the method and outline the next steps for articulating the experiences of communities impacted by AI use in public services. This effort aims to encourage participation in reimagining the adoption of AI innovations.
Short abstract:
Presenters will facilitate hands-on activities and reflections on the use of healthcare artificial intelligence case studies, engaging healthcare professionals and patient advisory groups to promote ethical implementation and governance.
Long abstract:
As applications of artificial intelligence (AI) are rapidly integrated into healthcare, there is a pressing need for educational content to prepare various end users for identifying, interrogating, and addressing AI-related ethical challenges. However, few pedagogical resources exist to support end users as they consider the ethical dimensions of healthcare AI. Involving highly technical elements and emerging regulatory structures, healthcare AI presents unique challenges to educators. Additionally, the lack of transparency behind AI algorithms can limit opportunities to examine more nuanced ethical themes related to algorithmic biases and validation. These limitations have far-reaching implications when applied in practice, raising broader ethical concerns related to end use and public trust, distribution of accountability for clinical decisions, and oversight of healthcare AI. While evidence supports the use of case-based learning in ethics education, the complexity of AI technologies demands careful consideration of how to integrate case-based learning into AI ethics engagement. Presenters will provide guidance on developing and using the AI ethics case studies we have employed in AI education for healthcare professionals and for a patient advisory group that advises a healthcare organization on the ethical implementation of AI. Drawing on our experiences as educators and as leaders of the patient advisory group, presenters will facilitate hands-on activities and reflections related to the use of AI cases to promote end user engagement. Attendees will be invited to discuss questions that can be iteratively modified to meet the needs of end users in their own contexts, promoting responsible integration of AI, user involvement, and participation in governance.
Short abstract:
This talk will share experiences with using creative fiction exercises as tools for centering the engagement and empowerment of non-technical audiences in AI technology assessment and design.
Long abstract:
Constructive Technology Assessment techniques call for the use of real-time strategies to equip those involved in the processes of design with the instruments to make informed decisions regarding the ways in which values become embedded into technologies during the design process. Some of the earliest articulations of constructive technology assessment identified the need for “useful fictions” in the form of “socio-technical scenarios” that could guide design decisions in the face of uncertainty about the outcomes of design choices. Buttressed by recent work calling for more radical forms of inclusivity in design processes (e.g., “Design Justice” and “Designs for the Pluriverse”), constructive technology assessment calls for an ethical design process that systematizes and formalizes modes of anticipatory governance. Extrapolative/speculative fiction writing can serve as a way of expanding the sociotechnical imaginaries employed in design pedagogy to assist in the collective creation of more fair, just, and equitable technologies. This talk will share experiences with using creative fiction exercises as tools for the engagement of both technical and non-technical audiences in AI technology assessment.
Short abstract:
In this paper, we discuss how first-year design students have been encouraged to question responsible AI innovation and to ensure good practice whilst working with the public, merging traditional processes such as sketching, collage, and storytelling with AI tools to aid creative thinking.
Long abstract:
Recent advances in machine learning, specifically “generative AI Art”, have produced a $48 billion industry, creating video, voice, text, and animation outputs. These image generators are reshaping design practices as designers leverage the technology to reimagine innovative spaces.
Despite capturing the public's imagination with products like Midjourney and DALL-E 2, many creatives have spoken about the challenges they have experienced due to the proliferation of large-scale image generators trained on image/text pairs from the internet. For instance, practitioners report finding their copyrighted work amongst the training data without consent or attribution, leading to artwork forgery and feedback-loop biases (Kenthapadi et al. 2023). Image generators flood the internet with ‘acceptable’ imagery, supplanting the demand for creatives in practice and raising questions as to whether generative images fully express human creativity. However, despite the many challenges with generative AI technology in 2023, Zaha Hadid Architects embraced AI for early ideation, generating and testing numerous design concepts.
Whilst it is natural to be apprehensive towards AI, we need to address the developing landscape of digital creativity by challenging our understanding of creative processes and ensuring a relevant and responsible approach as the creative industries adapt to these new tools and processes. Despite the many problems with generative AI, these tools are here to stay; their goal should surely be to mitigate bias and to enhance, not replace, human creativity.
As part of this combined format submission, we will facilitate a 45-minute workshop exploring the future of spaces and places through the lens of AI.
Short abstract:
The talk focuses on how creative artists raise public awareness about AI while using AI tools that might compromise their works’ liberatory potential.
Long abstract:
This presentation will look at how creative artists such as Zach Blas, Rafael Lozano-Hemmer, Trevor Paglen, Hito Steyerl, and Lynn Hershman Leeson critique the narratives of innovation associated with artificial intelligence. Each of these artists has created installations designed for public engagement that emphasize the limitations of AI and the risks it poses to personal liberty as a means of promoting heteronormativity, ignoring state violence, normalizing militarism, exploiting labor, and shoring up the patriarchy. This richly illustrated talk uses the concepts of boundary objects and boundary work to examine the limitations of interactive digital art as a means of influencing public opinion and policy, particularly when the audience is compelled to participate in its own datafication.