
Accepted Paper:

Only practical knowledge or knowing the algorithm? Meanings and necessities of eXplainable Artificial Intelligence in care of older adults  
Karin van Leersum (Open Universiteit); Alexander Peine (Open University of The Netherlands)


Short abstract:

AI has the potential to support older adults in living independently. Explainable AI (XAI) could improve the understanding of AI-based decisions. Although XAI sounds simple, it proves more difficult in care practices, and the meaning of and need for explainability differ among stakeholders.

Long abstract:

The number of older adults living independently at home is growing, which is often said to bring a need for more technological assistance. Dutch policy aims to allow older adults to remain living at home as long as possible. In such policies, technologies are expected to support older adults in performing daily practices. Artificial Intelligence (AI), as part of these technologies, has the potential to improve personalized care and ageing in place. Yet the internal machinery of AI systems often remains hidden, as in a black box. Interest in eXplainable AI (XAI) originates from this black-boxing: XAI should assist users in understanding the underlying logic of the decision-making process and in identifying mistakes. It is unknown how various stakeholders understand AI and what value they see in XAI.

We conducted 21 scenario-based interviews to investigate XAI in care. We aimed to understand ‘what is XAI’ in the worlds of different stakeholders and the different enactments of XAI that become visible in their practices. Preliminary findings show that XAI sounds simple but proves more difficult in practice. Stakeholders express different meanings of and necessities for XAI, ranging from knowledge of algorithms or data-specific knowledge to practical understanding. In the care of older adults, trust and willingness to use AI are essential, and the level of explainability needed differs among stakeholders. As a follow-up, we recommend research into the enactment of XAI in practice, and into the form or degree of XAI needed and for whom.

Traditional Open Panel P360
Sociotechnical dimensions of explainable and transparent artificial intelligence
  Session 1 Wednesday 17 July, 2024