
Accepted Paper:

Mind over matter: the proliferation of large language models (LLMs) in therapeutic settings  
Zoha Khawaja (Simon Fraser University), Jean-Christophe Belisle-Pipon (Simon Fraser University), Geneviève Rouleau (Université du Québec en Outaouais)


Short abstract:

As LLMs enter therapeutic spaces, how can we foster and protect the autonomy and mental health of patients? To what extent can therapeutic functions be performed by LLMs to provide scalable therapeutic care without replacing or undermining human agency?

Long abstract:

With rising mental health needs and insufficient resources to meet them, artificial intelligence (AI) technologies have emerged as an optimistic techno-solution to bridge this gap. Despite being criticized for their lack of accuracy and analytical depth, large language models (LLMs) such as ChatGPT have been able to provide scalable remote services to users worldwide. The co-founder of Koko, an online platform that provides peer-to-peer mental health support, believed that if the platform utilized LLMs such as GPT-3, it could expand the peer support available to users seeking help. However, this decision was met with public concern about how such an integration would affect users’ help-seeking behaviours and their ability to exercise agency when presented with therapeutic support from an AI instead of their peers.

These concerns are amplified as LLMs enter therapeutic care: physical spaces give way to digital ones, care becomes more automated, and decision-making tasks that once rested solely on humans shift to AI. In psychological settings that depend on building therapeutic rapport, trust, and relational autonomy grounded in empathy, can LLMs be deployed safely without deteriorating patient well-being? Although empathy is required for most therapeutic tasks, some skills-based approaches could be allocated to an AI, where the scalability offered by LLMs may be beneficial. This talk will examine therapeutic interventions that utilize LLMs and the precarious balance that must be struck between scaling care and protecting patient autonomy.

Traditional Open Panel P282
Safe spaces of autonomy
  Session 1, Thursday 18 July 2024