Accepted Paper

Making Sense through Disagreement: How Users Question Problematic LLM Outputs  
Ole Pütz (Bielefeld University)

Paper short abstract

This paper argues that users attend to problematic output in chats with LLMs using methods that they also employ in conversations with other humans, but that these methods get subtly transformed in ways that help us understand what we are faced with in interactions with AI.

Paper long abstract

We have known since ELIZA that human users will attempt to make sense of a chatbot's output as a response to their own contributions to the chat, irrespective of the technological foundation of the system (Weizenbaum 1967; Eisenmann et al. 2023). While ELIZA was most successful when users imagined a psychiatric context and relied on a limited repertoire of conversational actions, large language models (LLMs) are much more flexible in the roles they can be instructed to play (Shanahan et al. 2023) and the conversational actions they perform (Stokoe et al. 2025). Still, we can observe how users find sense in generated outputs and disregard inappropriate output (Pütz 2025). At the same time, there are occasions where users attend to problematic model outputs. To discover such interactions, I use a large dataset of 2 million chat interactions that is available through the WildVis search tool (Deng et al. 2024), collect instances where users question or disagree with model outputs, and consider the sequential context of these occasions. These instances are compared to and contrasted with findings from conversation analysis concerning disagreements in interactions among humans. I will suggest that users attend to relevant problematic output in chats with LLMs with the methods they employ in conversations with other humans, but that these methods get subtly transformed in ways that help us understand what sort of human-machine interaction we are faced with.

Traditional Open Panel P136
Outlasting 'disruption': Empirical perspectives on practical reasoning with AI
  Session 2