Accepted Paper:
Paper short abstract:
We use tools and insights from ethnomethodology and conversation analysis to document and understand how people come to treat LLM-based interactive interfaces as ‘knowledgeable’ or even ‘intelligent’, and how they iteratively refine prompts to coax text generators towards desired responses.
Paper long abstract:
The unprecedented spread of large language models provides us with what is possibly the greatest natural experiment in human sense-making since the sociological breaching experiments of Garfinkel. Garfinkel studied how people in interaction respond to things like preset phrases presented according to a randomized schedule; responses specifically designed to conceal a lack of understanding; and statements that blatantly contradict the evidence before their own eyes. He found that people were willing to go to great lengths to provide a commonsense interpretation of the talk they were exposed to. This work revealed that people bring practical methods for sense-making to just about any interactively presented material. Against this background, it is unsurprising that people have been quite impressed by large language models that generate statistically plausible continuations, fine-tuned to conform to human ratings of ‘helpfulness’ and ‘authoritativeness’.
Here we plan to bring the analytical tools of ethnomethodology and conversation analysis to bear on the study of how people make sense of interactive interfaces, particularly those of text-based language models. We present early results of an observational, qualitative, sequential analysis of records of human-LLM interactions. This represents a fresh take in an area where automated metrics and large-scale quantitative analyses reign supreme. We aim to document how people come to treat text-based interfaces as ‘knowledgeable’ or even ‘intelligent’, and how they iteratively refine prompts to coax text generators towards desired responses.
LLMs and the language sciences: material, semiotic, and linguistic perspectives from STS and linguistic anthropology
Session 2, Friday 19 July 2024