
Accepted Contribution:

Near(by)ness: what was (and what might be) the large language model?  
Sarah Ciston

Short abstract:

How did words become vectors? What historical and technical gestures enabled and accelerated the growing reverence for probabilistic language and spatialized knowledge that we see in LLMs today, and how can this be undermined through interdisciplinary, intersectional, and material reconsiderations of LLMs?

Long abstract:

How do language models represent and reproduce difference, as a fundamental aspect of their operations and imaginaries? How might they ‘know’ difference differently? This contribution examines two contrasting socio-technical imaginaries. One does not yet exist, but is prefigured by and latent within the other. The other currently prevails in the explosion of large language models (LLMs), but its roots grow from 20th-century aerial weapons development, eugenicist statistics, cryptography, behavioral psychology, phonology, linguistics, and cybernetic research centered in the US, UK, Germany, and Russia.

This imaginary matters because the decisions that produced LLMs have also determined what language models understand about difference. These key reductive moves—reducing similarity to equivalence, reducing proximity to similarity—take place as small technical gestures one might barely call decisions. Through basic but compounded operations, they still determine how LLMs inscribe difference into their outputs and onto bodies across the world, from those using their interfaces to those laboring to moderate their content.
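As a concrete illustration of these gestures (a minimal sketch, not drawn from the contribution itself): in a typical embedding pipeline a word is a point in vector space, ‘nearness’ is cosine proximity, and a nearest-neighbour lookup quietly treats the closest vector as the same-in-kind. The toy Python snippet below, with made-up three-dimensional vectors standing in for any learned embeddings, shows how proximity becomes similarity and similarity becomes equivalence.

    # Toy illustration (not from the contribution): how embedding pipelines
    # reduce proximity to similarity, and similarity to equivalence.
    import numpy as np

    # Hypothetical 3-d "embeddings"; real models learn hundreds of dimensions.
    embeddings = {
        "nurse":    np.array([0.9, 0.1, 0.3]),
        "doctor":   np.array([0.8, 0.2, 0.35]),
        "mechanic": np.array([0.2, 0.9, 0.1]),
    }

    def cosine(u, v):
        # Gesture 1: spatial proximity is read off as semantic similarity.
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def nearest(word):
        # Gesture 2: the single nearest neighbour is returned as if it were
        # an equivalent, collapsing graded nearness into a category judgment.
        others = {w: cosine(embeddings[word], v)
                  for w, v in embeddings.items() if w != word}
        return max(others, key=others.get)

    print(nearest("nurse"))  # -> 'doctor': close in space, so treated as same-in-kind

Nothing in this sketch is exotic; it is the ordinariness of such operations, compounded at scale, that the abstract identifies as the site where difference gets inscribed.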

If the language model’s proximity or ‘nearness’ has been the foundation for assumption-making, let us build computational ‘nearbyness’ that resists this reduction of complexity and entanglement, that brings us close without collapsing distinctions. I take up Trinh T. Minh-ha’s ‘speaking nearby’ to prefigure a contrasting socio-technical imaginary: ‘Nearbyness’ replaces knowing-as-classifying or -conquering with understanding through curiosity, commitment, and relation. Practically, this means returning to the material practices of LLMs with new protocols, reclaimed histories, and intersectional methodologies—to move from foundation models built on classificatory logics toward more transformative models that might unravel them.

Combined Format Open Panel P115
Global socio-technical imaginaries of AI
Session 3, Tuesday 16 July 2024