- Convenors:
- Kory Mathewson (DeepMind)
- Piotr Mirowski (DeepMind)
- Luba Elliott (elluba.com)
- S. M. Ali Eslami (DeepMind)
- Format:
- Panel
- Sessions:
- Wednesday 8 June 2022
Time zone: Europe/London
Short Abstract:
In this era of human-machine symbiosis, we explore what it means to live, think, and feel alongside machine enhancements to our cognition, physicality, and creativity.
Long Abstract:
The history of humanity is one defined by the invention of new, strange tools that extend the mind [Clark and Chalmers 1998, Noë 2015]. The 21st century is witnessing a major transition, a disruption in the relationship between humans and machines [Smith and Szathmáry, 1995]. Machine learning technology is augmenting human cognition. As we enhance ourselves, we will need to answer questions about the value of our experiences, the right to modify ourselves, the ethics of engineering, the tension between human and machine creativity, and the definition of life itself. How will we respond to this new world of possibilities? Will we be able to overcome our anxieties and cultural biases to embrace a new age of creative symbiosis? And how can AI help us to better understand each other, with big data holding up a mirror to our cultural artifacts?
Creativity is a crucial aspect of human life. Many argue that endless, creative thought is a defining characteristic of human intelligence. Several recent works have shown how machine learning can be used to produce creative outputs in collaboration with humans. In this panel, we will present recent developments in machine learning applications in the arts. The panel will include a general overview of creative processes from the perspective of generative systems. We will present examples from different AI-generated domains: music, art, poetry, improvisation, theater, and psychotherapy. We will explore visions of the future of human-machine creative symbiosis and we will discuss the potential challenges and benefits of human-machine co-creativity.
Accepted papers:
Session 1: Wednesday 8 June 2022

Paper short abstract:
Disputing intelligence as traditionally pertaining to the tech industry, I research idiotic machinic agents imbued with curious intelligences in order to reconfigure the human-machine encounter. The open-ended absurdity of idiocy might facilitate creative interactions beyond automated functionality.
Paper long abstract:
Human-machine interactions are usually centered on a specific kind of intelligence that meticulously designates roles, agencies and operations. However, this intelligence is a human-centered and tech-driven construct incapable of encapsulating the complexity of the diverse human and non-human ecologies pertaining to our world. The intelligence embedded within advanced tech entities may be commendable in the sense of being able to converse with human cognition, but it is nonetheless limited to that which can be algorithmically computed for the sake of operationality and the tech industry's evolution. On the everyday scale, this suggests not only a limitedness in human-machine interactions but also a deprivation of the creative potentialities of these interactions. It should not be only intelligence that characterises human-machine encounters, but absurdity, creativity and controversy too.
What if human-machine interactions operated on the opposite of intelligence? Looking at idiocy, as that which lies outside the norm and speaks from a non-deterministic stance open to potentialities rather than measurable facts, I investigate 'idiotic' human-machine encounters beyond automated functionality. Through participatory methods engaging multidisciplinary designers in collective speculation around 'other-than-functional' machines, I channel their feedback into explorations with materialities to devise 'idiotic' machinic agents. Idiotic artefacts of unpolished form are imbued with absurd intelligences exhibiting curious behaviours, in order to explore how they might foster ingenious, non-pre-scripted interactions. I explore the unquantifiable absurdity of human-machine symbiosis and its co-constituted agency in impacting the everyday creatively. I research creative interactions beyond the gallery space, taking place within the everyday realm of the domestic.
Paper short abstract:
This paper presents an example of a co-mingling of creative agents, where the creative process is an entanglement of human and human-machine logics. I argue that framing creative processes as sympoetics productively decentres human agents and expands the possible futures of human-AI creativity.
Paper long abstract:
This paper presents an example of a co-mingling of creative agents. A short video essay-poem is used as the starting point of an exploration of GoogleTranslate/GoogleLens as AI-based creative agents. These current artificial intelligence-based technologies are used to re-render ancient maps, where mapping is understood as a socio-historically embedded practice. Translations are then incorporated into a video-art-based poem. Through experimentation and haphazard poetics, a co-mingling of creative literary agents bound up in human and human-machine logics emerges. Using this example, I argue that rather than symbiosis, this creative process can be framed as sympoiesis (Haraway 2016), and that AI-human creativity, understood as sympoetics, allows for a decentring of the role of the human agent in creative work. This, in turn, opens the way for a disavowal of technology as mere tool, and expands the possible futures of human-AI creativity.
Paper short abstract:
This presentation shares early reflections on a collaborative project combining both cognitive science and philosophy approaches with theater and immersive experience design to yield new insights into how humans and AI systems relate in the context of a self-driving car.
Paper long abstract:
In this presentation we reflect on our collaborative project combining cognitive science and philosophy approaches with theater and immersive experience design to yield new insights into how humans and AI systems relate. The context for our project is the near-term future of self-driving vehicles and the complexity this change will bring: large-scale system shifts in infrastructure and industry, and changes in the way humans and cars relate. As cars transform, new paradigms will be needed for understanding how human and vehicle interact. Specifically, we set aside notions of human-technology interaction as rigid input/output relationships. Instead, we co-explore a future where vehicle and mobility systems are one of many adaptive elements in the soft-assembled potentiality of future human-technology systems (Kelso, 1995).
Inspired by research on life after the car (Dennis & Urry 2009), the sociology of car cultures and mobilities (Miller 2001, Grieco & Urry 2011) and Parvin and Pollock’s research (2020) into the rhetoric of the unintended consequence, we draw out possible future perspectives on our actions today, through imaginative, participatory simulation. Participation in our VR car simulator invites visitors to contribute their own memories of the car as vehicle for identity, culture, and freedom, offering a potential future strategy for memorializing the car. These stories are used to generate AI text, creatively voiced by a human improv actor portraying the car AI, to enact a speculative future of how humans and AI systems (including in vehicles) will relate to one another, through the frame of shared experience.
Paper short abstract:
Technology, such as AI, is present in both the generation and the distribution of culture. How do artists exploit neural networks for creative purposes, and what impact do these algorithms have on contemporary practices?
Paper long abstract:
Through practice-based research methods we have been exploring the potentials and limits of current AI technology, more precisely neural networks in the context of image, text, and form. From proof of concept, deep learning (DL) has evolved into a tool applied in art production. Moreover, we see a specific genre or niche emerging that concentrates specifically on art made with AI.
In terms of DL development, in a relatively short time generation has progressed from high-resolution images to 3D objects. More exciting still, there are models, such as CLIP and text2mesh, whose input is a different medium from their output; the first, for instance, underlies text-to-image generation. This twist stimulates creativity, which manifests itself in art practice and feeds back into the developers' pipeline. Yet again, we see how artists act as catalysts for technology development.
Such novel creative scenarios and processes are enabled not only by available AI models but also by the hard work of implementing these new technologies in real-time and autonomous applications with custom-made data sets and algorithms. AI does not create a 'push the button' masterpiece; it requires a deep understanding of the technology behind it and a creative mind to come up with high-quality work. Our previous research has shown that the most interesting and valuable results are achieved when DL tools are combined with human input. Thus, AI opens new avenues for inspiration and offers novel tool sets, but fails to automate creativity.