
Accepted Contribution:

What does your voice reveal about you?: Exploring the ethical demands for the design and usage of voice-based chatbots in digital mental healthcare  
Scott Lear, Geneviève Rouleau (Université du Québec en Outaouais), Nathanael Siaoman (Simon Fraser University), Zoha Khawaja (Simon Fraser University), Jean-Christophe Belisle-Pipon (Simon Fraser University), Hortense Gallois (Simon Fraser University)


Short abstract:

Voice-based chatbots have the potential to make great strides in performing clinical tasks, but without proper oversight, ethical, legal, and social implications may arise in their deployment. We ask experts to identify the ethical demands required for the design and usage of such a technology.

Long abstract:

Using voice as a biomarker for clinical support has been anticipated to provide an easy, cost-effective, and non-invasive means of collecting health data, and when paired with artificial intelligence (AI), may significantly increase accuracy in diagnosing, predicting, and monitoring interventions. When vocal biomarkers are coupled with the 24/7 availability, individualized support, and remote application provided by AI-powered chatbots, voice-based chatbots have the potential to improve therapeutic care. However, the design and usage of voice-based chatbots for mental health can bring about a myriad of ethical, legal, and social implications (ELSIs), such as inadequate support and guidance due to bias in the design, a lack of data protection and privacy regulations, and overreliance on the technology itself. The need for ethical guidelines and oversight is imperative, but how can one foster a responsible governance framework for such voice-based chatbots?

We propose an anticipatory ethics approach using a Delphi process to identify the ELSIs ahead of the advent of such a technology. A panel of experts from varying disciplines, along with community representatives, will be tasked to ‘voice’ their opinions on the values they believe should be central to the technology’s design and usage. The results will help explore what constitutes a trustworthy voice-based chatbot and to what extent therapeutic tasks can be replaced by AI. By involving key stakeholders in the early stages of developing ethical guidelines for voice-based mental health chatbots, we are optimistic this will help avoid ELSIs downstream, when these technologies are designed, developed, and deployed in clinical settings.

Combined Format Open Panel P066
Envisioning ethics – what does it mean to integrate ethical reflection into the early phases of technology development?
  Session 1: Tuesday 16 July, 2024