Accepted Paper:
Paper short abstract:
The idea of artificial intelligence requires that human imagination and sociality be abstracted out of prediction and decision-making processes undertaken by machines. This paper explains that such abstraction has a de-humanising effect, generating novel risks of harmful governance practices.
Paper long abstract:
In order to accept the proposition that intelligence can be 'artificial' or non-human, we need a consilient model of intelligence, one that works for both computer science and social science. Currently, in a computer science context, 'intelligence' describes the efficacy of automated prediction and decision-making processes: 'intelligent' predictions and decisions are those that yield a desirable outcome. In a social scientific context, such processes are also considered imaginative and social. Imaginatively, we draw on culturally specific systems of ideas when making predictions about the possible effects of our actions. Socially, we learn our ideas from one another, we use those ideas to make predictions and decisions about our relationships with one another, and we use our real-world experience of those relationships to modify our ideas.
In a computer science context, it may seem intuitively plausible to exclude human imagination and sociality from prediction and decision-making processes. Once a system of ideas is embedded in code, that code will run independently, according to its pre-programmed logic and available data sources. However, as with all human technology, the observable effect of this de-humanisation is not the removal of human imagination or sociality but their abstraction. This paper describes the process of feedback between increasingly abstract ideas about artificial intelligence and the accelerating instantiation of those ideas in real-world, AI-mediated interactions between people. It then explains that this accelerating abstraction also has a de-humanising effect, generating novel risks of harmful governance spanning healthcare, education, economics, justice and environmental management.
AI as a Form of Governance: Imagination, Practice and Pushback
Session 1: Wednesday 8 June 2022