Accepted Paper:

The garden of unearthly LLM delights: have we been tricked down the path?  
Etienne Grenier (Institut national de la recherche scientifique)

Short abstract:

The Stochastic Parrot controversy highlighted the emergence of flawed hermeneutical machines fostering the automation of interpretation. Interviews conducted with technoscientific field specialists reveal the fractures left in the wake of the deployment of LLMs outside their labs.

Long abstract:

Computer scientists Bender and Gebru called out Large Language Models (LLMs) as stochastic parrots. Pulverizing records on state-of-the-art language comprehension benchmarks, LLMs have cracked human syntax while lacking any deep semantic understanding of their constructs. The emergence of this class of hermeneutic machines built on statistical inference (Roberge & Lebrun, 2017) announces the automation of interpretation. Meaning is now disconnected from its formal manifestation, as training data is concerned only with the latter. Confronted with the growing acceptance of invalid statistical relationships in the name of practicality, and with the neglect of a deep theoretical understanding of the problems it claims to solve (Jones, 2004), the AI research community now faces a sensemaking crisis. If "we have been led down the garden path" (Bender, Gebru et al., 2021), where are we standing now, and how can sense be made in this strange landscape?

Through a series of interviews conducted with computer science experts distributed across Canada, researchers from the Shaping AI initiative collected data that could offer potential answers. Following the foundational ethnographic work accomplished within AI research communities (Forsythe, 1993; Hoffman, 2017), we caught a glimpse of how these experts reflect upon their research practices. Our preliminary results suggest the existence, within the scientific community, of what could be construed as a hermeneutic malaise fuelled by the neglected issue of sensemaking in AI systems. We argue that sensemaking is the core issue that must be addressed to prevent this malaise from developing into a full-blown crisis.

Traditional Open Panel P228
Rebooting the STS programme for AI: emerging controversies and methods for studying 21st-century artificial intelligence
  Session 1 Tuesday 16 July, 2024