
P256


What is limiting artificial intelligence? STS perspectives on AI boundaries. 
Convenors:
Marc Böhlen (University at Buffalo)
Andrew Lison (University at Buffalo, SUNY)
Format:
Traditional Open Panel

Short Abstract:

This panel seeks to host conversations specifically around the question of contemporary limits to AI. These limits can be imposed or inherent—that is, enforced by human beings by choice or given by the material resources required to produce AI in the first place—or both.

Long Abstract:

Recent discussions of artificial intelligence have largely been driven by a sense of inevitable transformation. Our panel seeks instead to stimulate conversations around the question of contemporary limits to AI.

Limits can be understood in two contradictory senses. The first is the notion that external constraints will need to be imposed on a technology that is developing faster than social mores can adjust. A second is becoming apparent in the energy and water resources required to produce state-of-the-art language models; such environmental constraints gesture toward limits of producibility. AI, then, may either need to be held in check (as in the accelerationist case of an out-of-control artificial general intelligence) or present unacknowledged shortcomings of its own (revealed through critique).

Our panel seeks contributions discussing how these limits might manifest themselves in specific situations, and how they might be addressed through practical countermeasures. We encourage submissions that resist industry narratives marginalizing social concerns in favor of unrealistic and/or unethical visions of unlimited technical performance and blind optimization. Similarly, we welcome presentations addressing conflicts between AI development and natural resource availability. Downstream, ecological restrictions are also global limits, as excessive power and cooling costs must be passed on to users while their externalities risk environmental harm. Moreover, if only large corporations and state actors end up possessing the means to support advanced AI production, given the energy and mineral costs associated with large-scale computation, what could this mean for the future of technology and democracy?

STS is ideally suited to address this topic as its varied approaches can foster connections across the boundaries, both social and material, confronting AI. In doing so, it can inform more pragmatic approaches to the potential transformations promised by AI’s uncritical proponents.
