Accepted Paper:

LLMs as conversational agents in the interview society  
Spencer Kaplan (Yale University)

Short abstract:

This paper studies the language ideologies of AI research through ethnographic fieldwork in San Francisco. Here, LLM-based conversational agents invoke the ideologies of the interview society. These ideologies inform LLMs’ perceived capacities and authority yet pose certain hazards for their use.

Long abstract:

This paper examines the language ideologies motivating generative AI development in the San Francisco Bay Area. To do so, it offers two case studies from ongoing ethnographic fieldwork among AI researchers in the region. The first case comes from the subfield of AI Safety, which seeks to “align” AI models with so-called human values. Here, researchers employ LLMs as “conversational agents” that supposedly discern human interlocutors’ underlying values through deliberative interaction. The second case comes from researchers’ use of LLMs in their personal lives. It describes efforts to fine-tune models like OpenAI’s ChatGPT with transcripts from discussions about topics like AI’s societal implications, creating purported conversational experts on those topics.

Both cases exemplify the use of LLMs as technologies that collect conversational data for interpretation through abduction. In such an application, LLMs employ the interactional and epistemic techniques of the “interview society” as described by Atkinson, Silverman, and, later, Briggs. Here, interviews offer privileged and authoritative access to knowledge that is otherwise hidden—especially knowledge about persons. To do so, interviews invoke Liberal and Romantic language ideologies about public reason, inner expression, and authenticity. In conversational agents, these ideologies now inform the perceived capacities and authority of LLMs. Yet interviews are always partial and positioned, posing hazards for LLM applications. These hazards are already faced by social researchers engaging in interview methodologies. By approaching interviews as an interactional form common to LLMs and social researchers alike, this paper also raises important reflexive questions for scholars of AI.

Traditional Open Panel P296
LLMs and the language sciences: material, semiotic, and linguistic perspectives from STS and linguistic anthropology
  Session 1, Friday 19 July 2024