Accepted Paper:
Short abstract:
This research engages with (im)possible 'knowledge bodies' in large language models (LLMs). It rethinks voices and narratives in AI through and with LLMs in experimental conversations. This post-qualitative method offers new insights into, and resistance against, grand narratives through feminist approaches.
Long abstract:
This research explores the formation of 'knowledge bodies' engendered in and with the large language models (LLMs) currently available on the Internet. LLMs are analysed as approximations of AI, following the industry's optimistic grand narratives and occasional controversies, such as the claims about LaMDA's sentience made by Blake Lemoine, the Google researcher fired over them. Extending concerns about the problematic sourcing of training data, the exclusion of specific experiences, and the concentration of machine learning innovation in Silicon Valley, the suggestion here is that the non-, weird- or (im)possible embodiment of datasets and language models has important implications for STS research on AI, including flat ontological perspectives on bodies and data, as well as the possibility of resistance through feminist and decolonial approaches. Combining cultural studies of data with data science techniques and performative experimentation with Llama, this research will document experiments in conversations with generative chatbots as an innovative post-qualitative method. These will pick up on feminist concerns for (im)possible bodies (Rocha and Snelting, 2022), imitation (Kind, 2022), bodies of water (Neimanis, 2017), bodies of work, knowing bodies, and so on. It is an invitation to think about (im)possible embodiment as a tactic for refusing and complicating the binary choice between technocratic and technophobic narratives around data.
(Re)Making AI through STS
Session 1 Wednesday 17 July, 2024, -