- Convenors:
Kate Hennessy (Simon Fraser University)
Gabriela Aceves Sepulveda (Simon Fraser University)
- Discussants:
Trudi Lynn Smith (University of Victoria)
Freya Zinovieff (Simon Fraser University)
Steve DiPaola (Simon Fraser University)
Prophecy Sun (Emily Carr University of Art + Design)
Cecil Brown
Tylar Campbell (Simon Fraser University)
- Format:
- Roundtable
- Sessions:
- Wednesday 8 June, -
Time zone: Europe/London
Short Abstract:
Toward an anthropology of the multimodal, we present research-creation works engaging AI that represent, visualize, and interrogate expressions of gender, race, histories, relationalities, materiality, and cultural forms for exhibition and dialogue across disciplinary boundaries.
Long Abstract:
In this roundtable, members of the Making Culture Lab and the CriticalMediaArtsStudio (cMAS) at Simon Fraser University, with collaborator and discussant Steve DiPaola, will present recent works engaging AI that represent, visualize, and critically interrogate expressions of gender, race, histories, relationalities, materiality, and cultural forms for exhibition and dialogue across disciplinary boundaries. As a method increasingly visible in anthropological work, research-creation brings together artistic and scholarly methodologies and legitimates hybrid outputs (Loveless 2015). It raises questions about the reshaping of artistic research into an academic discipline (Steyerl 2010), and asks what is at stake in pedagogy, practice, and experimentation (Manning 2016).
Responding to the growing prominence of the multimodal in anthropology, we present research-creation artworks that act on calls for "an anthropology of the multimodal" (Takaragawa et al. 2018; Smith and Hennessy 2020), one that engages the multimodal's position as an expression of technoscientific praxis and infrastructures, including artificial intelligence. This work shows how the multimodal, deeply intertwined with increasingly ubiquitous AI, is complicit in the reproduction of power hierarchies and forms of oppression, but may also forge fugitive pathways to new ways of seeing and sensing. Works include engagements with oral history, race, and fugitive spaces; machine vision and classification of the material; facial recognition and surveillance; AI-generated narrative, sonic decay, and poetry; and gendered bodies, kinship, and bacteria. Together they point to possibilities for anthropology to engage critically with technoscience, and to the place of research-creation as a method for an anthropology of the multimodal.
Accepted papers:
Session 1: Wednesday 8 June 2022
Paper short abstract:
Sonic Ecologies of Bodies and Place: Multimodal Narratives describes a series of collaborative projects that incorporate video, sound, performance and AI tools to pose questions about the complex material, ecological and temporal entanglements between humans, non-humans and technology.
Paper long abstract:
Sonic Ecologies of Bodies and Place: Multimodal Narratives describes an iterative collaborative process under way since 2019 between artists Gabriela Aceves Sepúlveda, Steve DiPaola, prOphecy sun, and Freya Zinovieff. Through a series of visual, sonic, and performative works, the artists interrogate the relationships between human and non-human bodies in relation to the spatially distributed results of colonial administrative practices, such as the climate crisis, borders, and neoliberal capitalism. Within this interdisciplinary collaboration, the artists use AI and machine learning processes as creative tools, a hallmark of DiPaola's earlier practice. Seeking not to provide answers but to pose questions, the artists weave together stories that traverse disciplinary boundaries and draw together multimodal narratives that highlight AI as a fifth entity in the collaborative process. They also use AI and machine learning processes to critique the complex material and temporal entanglements in which scholars of technology find themselves during this age of escalating ecological disaster.
Paper short abstract:
white clouds in blue sky juxtaposes a performative engagement with the materiality of gallery refuse with the poetics and politics of AI and machine vision, where humans and machines increasingly mutually constitute, reinforce and rewrite classifications and meanings of things.
Paper long abstract:
white clouds in the blue sky is a three-channel video installation that juxtaposes a performative engagement with the materiality of gallery refuse with the poetics and politics of machine vision. The artists methodically construct a sculptural heap of utilitarian objects, such as stacks of chairs and scrap materials, gathered after an exhibition and destined for the landfill. As they create and then deconstruct the pile of mundane and broken objects, these assemblages are interpreted by the DenseCap machine vision and description system, which is confounded in its attempts to accurately identify and interpret the assemblages of objects created. The video work highlights tensions between individual human structures of memory and imagination and contemporary computational image recognition systems. By drawing attention to the current limitations of machine vision in recognizing and describing objects, the work points to significant possibilities and difficulties as humans and machines increasingly mutually constitute, reinforce, and rewrite classifications and meanings of things. How will machines read images and artworks in the future, and what stories will be told about them? What stories will humans be able to tell and imagine in the future, in relation to new intelligent storytelling machines? What kind of planet will we inhabit? Will the skies be blue? Will the clouds be white?
Paper short abstract:
Welcome To The Metaverse is a work of Multi-Modal Ambivalence that humorously uses Instagram’s augmented reality face filters to engage users while at the same time encouraging them to be deeply skeptical of the shared digital environment of the Metaverse.
Paper long abstract:
You have been sent a face filter that replaces your eyes and nose with those of other Instagram users. A narrated voice commands you to blink, which signals your consent to the terms of service agreement. Instagram's face recognition AI notices you frowning as you react to the news that Mark Zuckerberg has earned more than $12,000 in the 30 seconds or so that you have been using the face filter. You are given some cryptocurrency as an apology. You have been using Welcome To The Metaverse, a satirical work of research-creation that explores the politics of The Metaverse using augmented reality face filters inside Meta's Instagram platform.
Welcome To The Metaverse responds to Astacio et al.'s manifesto for engaged makers to create work for "S@!#t Times" (2021). It is a work of Multi-Modal Ambivalence that humorously uses augmented reality filters to engage users while encouraging them to be deeply skeptical of the shared digital environment of the Metaverse. The work also follows Ina Sander's recommendations for increasing Critical Big Data Literacy (2020) through interactive and personalized media. In the tradition of culture jamming and situationist détournement, it is an act of semiological guerrilla warfare (Eco, 1986) against an increasingly centralized and extractivist social media environment.
Paper short abstract:
This paper explores George Moses Horton as an oral poet, from Oral Horton to Written Horton, and now Digital Horton. This concept of "experiencing" is entirely new in fiction narration. Through the creation of an Acoustic Avatar, we demonstrate how users can "immerse" themselves in Horton's poetry.
Paper long abstract:
This paper explores George Moses Horton as an oral poet, from Oral Horton to Written Horton, and now Digital Horton. Using the tools of digital technology, such as virtual and augmented reality, we can transport the reader (viewer) back to antebellum Chapel Hill in the 1830s.
This concept of "experiencing" something, of having an "experience," is entirely new in fiction narration, because narration had been limited to literacy, books, and reading. Here was a new way to "experience" a narrative without reading. Like Torben Grodal, Professor of Film and Media Studies at the University of Copenhagen, we will present Horton's poetry as a role-playing game (RPG) that approximates a "real-life experience." We ask: how might it be possible to use virtual reality to do just that, to "experience" Horton's world as an "embodied" experience? We analyze the activities of a player through three angles: character creation, character interaction, and game mechanics.
In collaboration with Dr. Cecil Brown from Stanford University's Center for Spatial and Textual Analysis (CESTA), Ph.D. candidate Tylar Campbell from Simon Fraser University's School of Interactive Arts and Technology (SIAT), and Dr. Steve DiPaola at the SIAT iVizLab, we were able to create a virtual space for Horton and inhabit this immersive experience. In our demonstration, we create an Acoustic Avatar of George Moses Horton to show how users can "immerse" themselves in Horton's poetry.