- Convenors:
- Tara Mahfoud (University of Essex) and Christine Aicardi (King's College London)
- Format:
- Traditional Open Panel
Short Abstract:
In this panel, we seek to address two questions: 1) What can previous iterations of entangling mind and machine tell us about contemporary AI, neuroscience and neurotechnology? 2) How are current entanglements different, and what do they mean for future AI, neuroscience and neurotechnology?
Long Abstract:
The history of neuroscience is one intertwined with the history of building computational systems and machinic brains/minds – from Alan Turing’s thinking machines, to cybernetic brains, and more recently, the use of experimental and theoretical neuroscience in the development of Google DeepMind’s Artificial Intelligence (AI) techniques. Brains and minds have been, and continue to be, a source of inspiration for AI. Artificial neural networks at the core of recent AI developments are inspired by biological neural networks in animal brains, and new brain-inspired neuromorphic hardware architectures are designed to support them. Biological realism, however, is not the primary concern of AI. It takes inspiration from biology only in order to build more efficient AI tools - improving information processing performance for a vast range of activities, and lowering energy consumption. In this panel, we seek to address two overarching questions: 1) What can previous iterations of entangling mind and machine tell us about contemporary AI, neuroscience and neurotechnology? 2) How are current entanglements between mind and machines different, and what do they mean for future AI, neuroscience and neurotechnology? We invite contributions that will help tackle these topics from various perspectives and various locations, taking diachronic as well as synchronic approaches. Possible questions are: What are the epistemic, political, social and ethical consequences of these entanglements? What epistemic communities and institutions (military, corporate, etc.) are established around these different practices and around different conceptions of intelligence – human, animal, more-than-human? What kinds and what aspects of living organisms are brought up in simulation, mimicry or machinic reproduction of ‘intelligence’? How are they pared down, distorted – or rendered invisible?
Accepted papers:
Session 1
Andrew Brown (University of Washington), Eran Klein (Oregon Health and Science University), Sara Goering (University of Washington)
Long abstract:
Research participants in long-term, first-in-human trials of implantable neural devices (“brain pioneers”) are critical to the success of the emerging field of neurotechnology (e.g., Neuralink, brain-computer interfaces (BCIs), deep brain stimulation (DBS) for treatment-resistant depression, etc.). While there is much hype surrounding these novel technologies and their potential to fundamentally alter human consciousness and communication, some brain pioneers take a more nuanced approach to articulating the “cool” new things they can do with these devices. From 2023 to 2024, we conducted eleven open-ended interviews: six with brain pioneers (four BCI users and two DBS users) and five with their care partners. Drawing on qualitative data from these interviews, this paper explores some of the phenomenological descriptions participants gave of the neural technologies implanted in their brains. In particular, we consider three cases where users ‘repurposed’ the device to fit their own interests or aims. First, the use of a BCI to create artwork in Photoshop (controlling the cursor with their brain), awakening a new avenue of creativity. Second, the ‘gamification’ of a BCI, using it to play video games directly with the brain to make the games more challenging than with conventional assistive controllers. Finally, and quite different from the BCI use cases, the use of DBS as a neurotechnology of the self to challenge stigma surrounding treatment-resistant depression amongst one’s family and friends. We propose that these findings may offer glimpses of the new entanglements of mind, machine, and brain in the emerging field of neurotechnology.
Daniele Cavalli (École Normale Supérieure de Paris - PSL Research University)
Short abstract:
This contribution aims to discuss the consequences of a non-anthropocentric reading of cognition in the time of brain-inspired AI and its implications for human autonomy. This will be accomplished through a three-step critical exercise that seeks to reconceptualize human-machine entanglement.
Long abstract:
A renewed convergence of computer engineering and neuroscience is unfolding, facilitating the integration of bio-inspired principles into both software and hardware. Unlike previous ‘symbolic’ architectures, 1) brain-inspired AI systems capitalize on multi-layered and open-ended computation, making them more adaptable to today’s informational complexity. Moreover, 2) the social space and the environment are being transformed as a function of the technical objects that rely on these systems, to enable them to operate better. This contribution will first explain how these two conditions are crucial to understanding the increased agential capacity of these technologies.
These changes necessitate viewing AI as a material force embodied in decision-making systems. The contribution will then discuss the limits of an anthropocentric idea of cognition and intelligence, which involves 1) reading AI as a form of «infrastructural cognition», understood as a network of planetary computation, and 2) imagining the social space as a «cognitive ecosystem», understood as an assemblage of data flows, non-human forms of intelligence, institutional and intellectual structures, and connected technologies. This entails an onto-epistemological reframing of the relationship between human and technology: no longer simple mediation and interaction but radical entanglement.
Finally, moving to the normative level, this contribution will ask: how should a category as central as human autonomy, which always presupposes a form of cognitive independence, be remodelled? In this third step, it will draw on new materialist approaches, especially their interpretation of non-human agency and relational assemblages – also trying to elucidate the conceivable constraints of this non-essentialist interpretation.
Stephen Rainey (Delft University of Technology)
Long abstract:
The prospect of AI-enabled mind-reading is beginning to capture the public imagination. Brain-Computer Interfaces (BCIs) are already used to control software and hardware based on brain data. Because this data can be correlated with identifiable mental states, some think BCI data could be further decoded to produce mind-reading applications. Striking cases already exist of ‘dream decoding’ and inner speech reproduction based on brain data decoding. The prospect of mind-reading machines is boosted further – perhaps especially – through evolutions in generative AI. Recent research claims that generative AI models, like ChatGPT, are able to produce coherent verbal outputs matching experimental participants’ perceptions and thoughts. Careful scrutiny is required of such prima facie evidence suggesting that entangled AI and neurotechnology amount to mind-reading technology.
If mind reading really were technically possible, there could be far-reaching and disruptive consequences for ourselves and the systems, societies, cultures, and practices in which we live. For example, could mind reading machines be used in court cases? Could machines diagnose mental illnesses? Could they lead to new kinds of shared consciousness? Would big tech companies own the brain data produced in mind reading or would it remain personal? Already, there are those who seek novel human rights in the face of burgeoning neurotechnological capacities. These capacities are thought to threaten mental privacy, the integrity of thought and mental freedom. This paper explores conceptual puzzles, practical questions, and potential cross-sectoral consequences by asking: What are the prospects for AI-enabled mind reading machines?
Marc De Leeuw (University of New South Wales)
Long abstract:
In May 2017 the European Parliament’s Committee on Legal Affairs submitted recommendations to the EU Commission on Civil Law Rules on Robotics, requesting that it develop new legislation addressing the challenges of “intelligent machines” described as “androids with human features”. The Motion recommends “a specific legal status for robots” with the aim of “applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently”. In December 2022, the EU published the report Neurotechnologies and Human Rights Framework: Do We Need New Rights?, which proposes extending human rights with ‘neuro-rights’. Recent innovations in brain implants show not only the medical and therapeutic prospects of reactivating lost functions, but also the possibility of intervening in our notions of self-awareness, memory, and self-control; the de- and re-coding of neural pathways is no longer science fiction but is being tested in labs around the world. Corporations, from OpenAI and Neuralink to Microsoft and Google, invest billions in Brain-Computer Interfaces (BCIs); military applications have implicated them in the military-industrial complex.
This paper examines how socio-political and legal narratives about “rights for AI robots” and “rights for human brains” reconfigure the fundamental legal binaries of persons and things, and artificial (AI) and natural (human) agency. If hybridization of biological and computational “brains” leads to “cognitive assemblages” (Hayles 2016), does the split EU proposal—one offering robots personhood making them liable for causing harm, and one offering neuro-rights to humans to protect them from computational neurotechnological harm—need to be unified?
David Murakami Wood (University of Ottawa)
Long abstract:
This paper presents the results of a comprehensive sociotechnological investigation into the publicly accessible online archives of the Defense Advanced Research Projects Agency (DARPA), the research funding arm of the US military. It analyzes and categorizes the projects funded since 2010 and places them in the context of military doctrinal discourses. It argues that, despite the variety of projects and the presentation of many of them as ethical and humanitarian, there are underlying ambitions to influence and control the brains of US soldiers, enemy combatants and civilians, consistent with longstanding tactical aims in US military doctrine. These ambitions link DARPA neuroscience research with projects vastly different in scale and scope but which all share an interest in creating what Paul Edwards described in the context of Cold War US military strategy as "a closed world," but which has more recently been termed "Full Spectrum Dominance" (FSD). FSD constructs a picture of the world as malleable, surveillable and controllable for US national security purposes at scales from the orbital to the molecular. Academia is deeply implicated in this project: research groups not just in American universities but across Five Eyes alliance nations and beyond have accepted DARPA funding to pursue projects that, while technologically possible, cannot be regarded as socially desirable, extending the possibility of "neurosurveillance" and control into the brain itself.
Henry Osman (Brown University)
Long abstract:
If the dream of posthumanism was immortality, the emerging computational paradigm is one of death. General-purpose digital computers for large-scale AI are reaching their limit point in favor of new architectures like analog neuromorphic computation. Last year Geoffrey Hinton, one of the leading figures of deep learning, argued that the future of AI was in its mortality. Hinton was referring to new research in neuromorphic computation in which hardware and software are one and the same. Data is stored as charges in memristors and floating gates rather than as code that can easily be transferred, so that when the computer “dies,” the data does too.
I take up Hinton’s provocation of neuromorphic “mortal computation,” in which software and hardware are inseparable and designed together, to chart a shift in the ontology of the chip centered on a rejection of the hylomorphic schema. I argue that this also constitutes a turn from software’s promise of immortal data, in the enduring ephemeral of the digital computer, to a mortal computation founded on temporary permanence. I do so by examining both Silicon Valley’s present fantasies of AGI and by turning to the late 1980s, when Carver Mead first began to develop neuromorphic computation at Caltech at the same time and in the same journals as deep learning and backpropagation. By historicizing Hinton’s claims about the future of computing, I offer a different history of what I term AI’s mortal materialism and underscore neuromorphic computation’s beginnings in analog machine vision.
Raffaele Andrea Buono (UCL)
Short abstract:
This paper moves across three axes to examine 1) the ontological and epistemological stances re-made at the juncture between robotics and neuroscience; 2) the power of, and need for, modelling in sustaining such a project; and 3) how such alliances are made successful by obscuring critical aspects of life.
Long abstract:
This paper draws from fieldwork in a robotics laboratory aligned with recent neuroscientific developments – the Free Energy Principle (FEP). The FEP has gained increasing traction, growing from a modest hypothesis to elucidate visual processing phenomena into a ‘theory of everything’ that aims to explain life.
I first elucidate the FEP’s intricacies, highlighting its connections to cybernetics. I suggest that the cybernetic project brought forward by the FEP is radically different from the optimistic picture highlighted by Pickering (2010), instead configuring information as a statistically knowable object.
Secondly, I argue that the principle’s explanatory power resides in its formalism, which generates models that reduce life to causative processes. Roboticists can leverage this formalism, replicating the models algorithmically and thereby enhancing their ontological legitimacy. Despite this pragmatic alliance, I describe an obscured clash: the brain-machine entanglement is partial, since engineers treat models as tools to increase efficacy rather than as ontological proofs. These different interests often become blurred, however, as evidenced by an ethnographic vignette of a failed experiment. The objectivist allure of modelling led the roboticists to attribute the failure not to a fundamental onto-epistemological fallacy but to the limitations of their implementation.
I thus highlight an inability on both sides to recognise modelling practices as producing vital models (Mahfoud et al., 2017). Instead, they push for a seductive, but dangerous, vision of life as it is produced by such modelling practices. By drawing on Simondon (2020) and Kauffman (2019), I conclude by outlining what it is that gets downplayed through such mind-brain entanglements mediated and made possible by modelling.
Guillaume Le Lay (Algorithmic Society Chair of the Multidisciplinary Institute of Artificial Intelligence (MIAI) - Université Grenoble Alpes)
Long abstract:
Drawing on a study of the development of French education in artificial intelligence, based on ethnographic observations of three university courses, I examine the way in which political, economic and industrial considerations are helping to rebuild the AI project, and consequently the place allocated to biological realism in the ordinary practice of teaching, learning, developing, perfecting and implementing contemporary AI algorithms.
I will show that biological realism and the heuristic ambition to learn more about the mind from AI experiments, despite the existence of ambitious but rare research programs in this field, are being sidelined in the training of future AI specialists in favour of an engineering conception of AI, centered on the search for instrumental efficiency.
As a result, even though the ordinary Machine Learning and Deep Learning algorithms that currently dominate the AI scene can indeed be defined as a repertoire of mathematical and computational techniques historically inspired by the workings of the biological brain, I will argue that the neurobiological metaphor of "Artificial Neural Networks" through which these algorithms are currently described serves essentially didactic and promotional purposes, in a context where the biological realism of algorithms is of secondary importance, if it matters at all. Indeed, if current ANNs can be considered "views of the mind", it is perhaps first of all in the sense of what Latour calls in French "vues de l'esprit" (1985), i.e. as a system of inscriptions through which current Machine Learning algorithms are described, taught, learned and promoted.