- Convenors:
- Tara Mahfoud (University of Essex)
- Christine Aicardi (King's College London)
- Format:
- Traditional Open Panel
- Location:
- NU-5A47
- Sessions:
- Tuesday 16 July, -, -
Time zone: Europe/Amsterdam
Short Abstract:
In this panel, we seek to address two questions: 1) What can previous iterations of entangling mind and machine tell us about contemporary AI, neuroscience and neurotechnology? 2) How are current entanglements different, and what do they mean for future AI, neuroscience and neurotechnology?
Long Abstract:
The history of neuroscience is one intertwined with the history of building computational systems and machinic brains/minds – from Alan Turing’s thinking machines, to cybernetic brains, and more recently, the use of experimental and theoretical neuroscience in the development of Google DeepMind’s Artificial Intelligence (AI) techniques. Brains and minds have been, and continue to be, a source of inspiration for AI. Artificial neural networks at the core of recent AI developments are inspired by biological neural networks in animal brains, and new brain-inspired neuromorphic hardware architectures are designed to support them. Biological realism, however, is not the primary concern of AI. It takes inspiration from biology only in order to build more efficient AI tools – improving information processing performance for a vast range of activities, and lowering energy consumption. In this panel, we seek to address two overarching questions: 1) What can previous iterations of entangling mind and machine tell us about contemporary AI, neuroscience and neurotechnology? 2) How are current entanglements between mind and machines different, and what do they mean for future AI, neuroscience and neurotechnology? We invite contributions that will help tackle these topics from various perspectives and various locations, taking diachronic as well as synchronic approaches. Possible questions are: What are the epistemic, political, social and ethical consequences of these entanglements? What epistemic communities and institutions (military, corporate, etc.) are established around these different practices and around different conceptions of intelligence – human, animal, more-than-human? What kinds and what aspects of living organisms are brought up in simulation, mimicry or machinic reproduction of ‘intelligence’? How are they pared down, distorted – or rendered invisible?
Accepted papers:
Session 1 Tuesday 16 July, 2024, -
Paper short abstract:
This paper explores three cases where participants in first-in-human trials of neural devices ("brain pioneers") 'repurposed' the neurotechnologies implanted in their brains to fulfill their own interests or aims. These include new ways to be creative, play video games, and challenge stigma.
Paper long abstract:
Research participants in long-term, first-in-human trials of implantable neural devices (“brain pioneers”) are critical to the success of the emerging field of neurotechnology (e.g., Neuralink, brain-computer interfaces (BCIs), deep brain stimulation (DBS) for treatment-resistant depression, etc.). While there is much hype surrounding these novel technologies and their potential to fundamentally alter human consciousness and communication, some brain pioneers take a more nuanced approach to articulating the “cool” new things they can do with these devices. From 2023 to 2024, we conducted eleven open-ended interviews: six with brain pioneers (four BCI users and two DBS users) and five with their care partners. Drawing on qualitative data from these interviews, this paper explores some of the phenomenological descriptions participants gave of the neural technologies implanted in their brains. In particular, we consider three cases where users ‘repurposed’ the device to fit their own interests or aims. First, the use of a BCI to create artwork in Photoshop (controlling the cursor with the brain), awakening a new avenue of creativity. Second, the ‘gamification’ of a BCI, using it to play video games directly with the brain to make the games more challenging than with conventional assistive controllers. Finally, and quite different from the BCI use-cases, the use of DBS as a neurotechnology of the self to challenge stigma surrounding treatment-resistant depression amongst one’s family and friends. We propose that these findings may offer glimpses of the new entanglements of mind, machine, and brain in the emerging field of neurotechnology.
Paper short abstract:
What are the legal implications of granting personhood rights to artificial agents (AI robots) to enable liability claims, while also granting neuro-rights to humans to protect them against the neurotechnological alteration of their brains? Advances in Brain-Computer Interfaces seem to dissolve the binary.
Paper long abstract:
In May 2017 the European Parliament’s Committee on Legal Affairs submitted recommendations to the EU Commission on Civil Law Rules on Robotics, requesting it to develop new legislation addressing the challenges of “intelligent machines” described as “androids with human features”. The Motion recommends “a specific legal status for robots” with the aim of “applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently”. In December 2022, the EU published Neurotechnologies and Human Rights Framework: Do We Need New Rights?, which proposes extending human rights with ‘neuro-rights’. Recent innovations in brain implants show not only the medical and therapeutic prospects of reactivating lost functions, but also the possibility of intervening in our notions of self-awareness, memory, and self-control; the de- and re-coding of neural pathways is no longer science fiction but is being tested in labs around the world. Corporations from OpenAI and Neuralink to Microsoft and Google are investing billions in Brain-Computer Interfaces (BCIs); military applications have implicated them in the military-industrial complex.
This paper examines how socio-political and legal narratives about “rights for AI robots” and “rights for human brains” reconfigure the fundamental legal binaries of persons and things, and of artificial (AI) and natural (human) agency. If the hybridization of biological and computational “brains” leads to “cognitive assemblages” (Hayles 2016), does the split EU proposal—one offering robots personhood, making them liable for causing harm, and one offering neuro-rights to humans, protecting them from computational neurotechnological harm—need to be unified?
Paper short abstract:
This paper presents the results of a sociotechnological investigation into the publicly accessible online archives of the US Defense Advanced Research Projects Agency. It analyzes and categorizes the projects funded since 2010 and places them in the context of military doctrinal discourses.
Paper long abstract:
This paper presents the results of a comprehensive sociotechnological investigation into the publicly accessible online archives of the Defense Advanced Research Projects Agency (DARPA), the research funding arm of the US military. It analyzes and categorizes the projects funded since 2010 and places them in the context of military doctrinal discourses. It argues that, despite the variety of projects and the presentation of many of them as ethical and humanitarian, there are underlying ambitions to influence and control the brains of US soldiers, enemy combatants and civilians, consistent with longstanding tactical aims in US military doctrine. These ambitions link DARPA neuroscience research with projects vastly different in scale and scope but which all share an interest in creating what Paul Edwards described in the context of Cold War US military strategy as "a closed world," but which has more recently been termed "Full Spectrum Dominance" (FSD). FSD constructs a picture of the world as malleable, surveillable and controllable for US national security purposes at scales from the orbital to the molecular. Academia is deeply implicated in this project: research groups not just in American universities but across Five Eyes alliance nations and beyond have accepted DARPA funding to pursue projects that, while technologically possible, cannot be regarded as socially desirable, extending the possibility of "neurosurveillance" and control into the brain itself.
Paper short abstract:
This paper takes up Geoffrey Hinton's provocation that the future of large-scale AI is in "mortal" neuromorphic computers. By historicizing present fantasies of AGI with Carver Mead's 1980s research in neuromorphic computing, I offer a different history of the contemporary turn to brain-inspired AI.
Paper long abstract:
If the dream of posthumanism was immortality, the emerging computational paradigm is one of death. General-purpose digital computers for large-scale AI are reaching their limit point, giving way to new architectures like analog neuromorphic computation. Last year Geoffrey Hinton, one of the leading figures of deep learning, argued that the future of AI lies in its mortality. Hinton was referring to new research in neuromorphic computation in which hardware and software are one and the same. Data is stored as charges in memristors and floating gates rather than as code that can easily be transferred, such that when the computer “dies,” the data dies too.
I take up Hinton’s provocation of neuromorphic “mortal computation,” in which software and hardware are inseparable and designed together, to chart a shift in the ontology of the chip centered on a rejection of the hylomorphic schema. I argue that this also constitutes a turn from software’s promise of immortal data, in the enduring ephemeral of the digital computer, to a mortal computation founded on temporary permanence. I do so by examining Silicon Valley’s present fantasies of AGI and by turning to the late 1980s, when Carver Mead first began to develop neuromorphic computation at Caltech, at the same time and in the same journals as deep learning and backpropagation. By historicizing Hinton’s claims about the future of computing, I offer a different history of what I term AI’s mortal materialism and underscore neuromorphic computation’s beginnings in analog machine vision.
Paper short abstract:
This paper moves across three axes to examine 1) the ontological and epistemological stances re-made at the juncture between robotics and neuroscience; 2) the power of, and need for, modelling in sustaining such a project; 3) how such alliances are made successful by obscuring critical aspects of life.
Paper long abstract:
This paper draws from fieldwork in a robotics laboratory aligned with a recent neuroscientific development – the Free Energy Principle (FEP). The FEP has gained increasing traction, growing from a modest hypothesis for elucidating visual processing phenomena into a ‘theory of everything’ that aims to explain life.
I first elucidate the FEP’s intricacies, highlighting its connections to cybernetics. I suggest that the cybernetic project brought forward by the FEP is radically different from the optimistic picture highlighted by Pickering (2010), instead configuring information as a statistically knowable object.
Secondly, I argue the principle’s explanatory power resides in its formalism, which generates models that reduce life to causative processes. This formalism can be leveraged by roboticists, who can replicate these models algorithmically, enhancing their ontological legitimacy. Despite this pragmatic alliance, I describe an obfuscated clash: the brain-machine entanglement is partial, since engineers look to models as tools to increase efficacy rather than as ontological proofs. These different interests, however, often blur, as evidenced by an ethnographic vignette of a failed experiment. The objectivistic allure of modelling led the roboticists to attribute failure not to a fundamental onto-epistemological fallacy, but to limitations of their implementation.
I thus highlight an inability on both sides to recognise modelling practices as producing vital models (Mahfoud et al., 2017). Instead, they push for a seductive, but dangerous, vision of life as it is produced by such modelling practices. By drawing on Simondon (2020) and Kauffman (2019), I conclude by outlining what it is that gets downplayed through such mind-machine entanglements mediated and made possible by modelling.
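For readers unfamiliar with the formalism at stake, the quantity the FEP casts organisms as minimizing is the variational free energy, which in its standard form (a notational gloss added here for orientation, not drawn from the paper) reads:

$$
F = \mathbb{E}_{q(s)}\left[\ln q(s) - \ln p(o, s)\right] = D_{\mathrm{KL}}\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o)
$$

where $o$ denotes observations, $s$ hidden states, $q(s)$ the organism's approximate posterior, and $p(o, s)$ its generative model; minimizing $F$ simultaneously fits the model and tightens an upper bound on the 'surprise' $-\ln p(o)$. It is precisely this compact formalism that lends the principle the explanatory and algorithmic traction the paper describes.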
Paper short abstract:
Starting from the observation that biological realism is sidelined in the training of future AI specialists, I examine the ambiguous status of "Artificial Neural Networks" in the teaching, learning, description and promotion of ordinary Machine Learning algorithms.
Paper long abstract:
Drawing on a study of the development of French education in artificial intelligence, based on ethnographic observations of three university courses, I examine the way in which political, economic and industrial considerations are helping to rebuild the AI project, and consequently the place allocated to biological realism in the ordinary practice of teaching, learning, developing, perfecting and implementing contemporary AI algorithms.
I will show that, despite the existence of ambitious but rare research programs in this field, biological realism and the heuristic ambition to learn more about the mind from AI experiments are being sidelined in the training of future AI specialists in favour of an engineering conception of AI, centered on the search for instrumental efficiency.
As a result, even though ordinary Machine Learning and Deep Learning algorithms, which currently dominate the AI scene, can indeed be defined as a repertoire of mathematical and computational techniques historically inspired by the workings of the biological brain, I will argue that the neurobiological metaphor of "Artificial Neural Networks" through which these algorithms are currently described serves essentially didactic and promotional purposes, in a context where the biological realism of algorithms is of secondary importance, if of any at all. Indeed, if current ANNs can be considered "views of the mind", it is perhaps first of all in the sense of what Latour calls in French "vues de l'esprit" (1985), i.e. as a system of inscriptions through which current Machine Learning algorithms are described, taught, learned and promoted.
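To make concrete how little neurobiology survives in the formalism the abstract discusses, here is a minimal illustrative sketch (an editorial addition, not material from the paper): an "artificial neuron" in ordinary Machine Learning reduces to a weighted sum passed through a squashing function, with no spikes, neurotransmitters or dendritic dynamics involved.

```python
import numpy as np

def artificial_neuron(x, w, b):
    """One 'neuron': a weighted sum of inputs passed through a logistic
    nonlinearity. The biological vocabulary is pure metaphor: the
    computation is linear algebra plus a squashing function."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# Illustrative values only: three "presynaptic" inputs and their "synaptic" weights.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.1, -0.7])
print(artificial_neuron(x, w, b=0.2))  # a single activation in (0, 1)
```

A whole "network" is just this operation composed layer after layer, which is the sense in which the metaphor is didactic and promotional rather than biological.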