- Convenors:
  - Sarah Pink (Monash University)
  - Emma Quilty (Monash University)
  - Debora Lanzeni (Monash University)
  - Kari Dahlgren (Monash University)
- Format:
  - Panel
- Sessions:
  - Thursday 9 June, -
- Time zone: Europe/London
Short Abstract:
This panel creates an interdisciplinary, futures-focused AI Anthropology, through which anthropologists can collaborate and shift the narratives in futures-focused spaces that other disciplines currently dominate.
Long Abstract:
The panel calls for papers, films and other media from anthropologists interested in creating a new interdisciplinary, futures-focused AI Anthropology. AI is becoming an inevitable part of life, and we need to develop new capacities for anthropologists to work in interdisciplinary, futures-focused spaces where other disciplines feel at ease. Our ambition is to develop a high-profile publication based on this panel.
We wish to engage in, contest and shift dominant discourses in which AI inhabits a future shaped and visioned by techno-solutionist politics and capital flows. Here futures are visioned through existing and anticipated engineering advances in AI capacity, the rise of consultancies' predictive audits (Shore & Wright) which frame AI as a techno-solution to societal, industry and policy problems, and the short-termist visions of governments complicit in digital capitalism. This context is underpinned by an extractivist approach to ethics, which assumes that if future autonomous, intelligent and connected technologies (e.g. self-driving cars, digital assistants, robotic workers) are invested with human ethics, then people will trust, accept and adopt them, thus enabling predicted futures.
The panel will bring together anthropologists with ambitions to participate theoretically, ethnographically, experimentally and interventionally in the interdisciplinary and multistakeholder spaces where futures are envisioned. We are open to different ways of approaching this, but seek to build an engaged and interdisciplinary Futures Anthropology (Pink & Salazar 2017) that undertakes anthropology with and in possible futures, interrogates AI ethics, and has an ethics of anthropological care and responsibility at its core.
Accepted papers:
Session 1: Thursday 9 June, 2022, -
Paper short abstract:
This paper describes three women who variously reject voice assistants in their homes. It shows how their experiences illustrate an incompatibility between the forms of control embedded in the design of digital voice assistants and the 'silent control' operationalised by women in the home.
Paper long abstract:
The feminine personas of digital voice assistants such as Siri and Alexa perpetuate gender stereotypes, not only in their portrayal of submissive, flirtatious, or incompetent ‘bitches with glitches’ (Strengers and Kennedy 2020), but also through user interactions which can exacerbate gender imbalances in the home. This paper focuses on the reflections of three women who variously reject engaging with voice assistants, in the context of a larger ethnographic study on digital and energy technologies with 72 households across VIC and NSW. While voice assistants promise to bring increased control and comfort to the home, participant experiences with voice assistants illustrate an incompatibility between the forms of control embedded in the design and marketing of automated smart home technologies, particularly digital voice assistants, and the forms of domestic control operationalised by women in their homes. While the ‘invisible work’ (Daniels 1987) of women’s domestic labour has long been recognised, this paper suggests a form of ‘silent control’ that is fundamental to this invisibility, and which contrasts with the vocalisation of commands that drives voice assistants and household automation. Such reflections reveal broader relationships between gender, households, and control, as well as how these relationships are affected by the adoption of automated smart home technologies. Further, the paper suggests that anthropological understandings of power, including its subtle manifestations, can contribute to the design of more inclusive AI and smart technologies.
Paper short abstract:
With bushfire smoke, asthma thunderstorms, allergens, and COVID-19, Australia's air is increasingly feared, while innovations in sensors and air quality analytics advance and air filter/purifier use grows. But should we be protecting ourselves from our air, or protecting our air from us?
Paper long abstract:
The possibilities for automating our air are growing, as innovations in sensors, air filtration and purification technologies, and air quality analytics and predictions advance. Simultaneously, the air (in Australia) is increasingly positioned as dangerous and feared, as bushfire smoke, asthma thunderstorms and other allergens, and airborne viruses like COVID-19 assemble as part of the composition of everyday air, in cities and in and around homes. In tandem, sales of domestic air filtration and purification technologies are increasing, these technologies are being installed in Australian schools, and internationally they are being marketed as part of luxury cars (Pink 2022).
This paper argues for attention to how such technologies might realistically and ethically become part of our everyday futures. In doing so it draws on design ethnographic futures research, which explores how people imagine possible future air technologies in their own homes: How do people envisage these technologies? What tasks would they perform? What values relating to safety, care and relationships do they articulate? What roles could automation and AI play in ethical, inclusive air futures?
In addressing these questions, we also respond to dominant narratives surrounding emerging smart and automated air technologies and the solutions they are promised to deliver.
Paper short abstract:
Rather than interrogate and challenge narratives around the impact of science and technology on everyday cooking practices as imagined by designers and marketers, this paper adopts a reverse perspective and argues that domestic kitchens are ideal places from which to study how futures are made.
Paper long abstract:
Contemporary kitchens are increasingly smart. Wired food processors offer a choice of recipes and prepare food for busy cooks, while smartphone apps propose meals or shop online. Whereas designers and marketers still seem to be imagining the futuristic kitchen in the (not quite yet) smart home, domestic cooks are already making the future from within their kitchens. Digital food processors such as Thermomix, various smartphone apps and meal-kit providers such as HelloFresh have been in use for years in domestic kitchens across the globe. Rather than interrogate and challenge oft-repeated narratives around the impact of science and technology on everyday practices such as domestic cooking as imagined by designers and marketers, this paper adopts a reverse perspective and argues that contemporary domestic kitchens are ideal places from which to study how futures are made. Domestic cooks, as they shop, process, prepare and/or eat food, engage with and contribute to (digital) knowledge on a daily basis, yet they are seldom considered technological pioneers or future-making figures. Based on ongoing ethnographic research in diverse kitchens in Frankfurt and the Rhein-Main region in Germany, this paper advances the notion of the cyborg cook (following Donna Haraway’s cyborg manifesto) to critique and overcome commonly held assumptions about, first, domestic cooks and, second, who makes the future in the context of everyday life.
Paper short abstract:
AI systems construct stereotypical and biased understandings of human experience, and anthropology is best suited to challenge their implicit human reductionism. Although the discipline can play a key role in the future of AI ethics research, it will need to design more interdisciplinary projects.
Paper long abstract:
With the rise of AI-driven technologies, algorithms have replaced paperwork in the construction of social truths (Graeber, 2016); they build truths about who we are, our cultural worlds and our identities. Anthropologists have discussed the implications of big data as meaning construction (see Boellstorff and Maurer, 2015), the powerful discourses of algorithms as culture (Dourish, 2016; Seaver, 2017) and the multiple ways in which people negotiate with data narratives in everyday life (Barassi, 2017, 2020; Pink et al., 2018; Dourish and Cruz, 2018). However, much more research is needed on the human reductionism implicit in these systems, and on the Western-centric and biased visions of human nature implicit in these technologies. This paper brings the findings of a three-year ethnographic project on the profiling of children from before birth (Child | Data | Citizen Project, 2016 - 2019) together with the findings of a (non-anthropological) research project analyzing the discourses around algorithmic profiling in Europe and the critical practices that are emerging against it (The Human Error Project, 2020 – ongoing). The paper will argue that anthropology has a fundamental role to play in the future of AI ethics research and the study of algorithmic profiling. The discipline reminds us that ideas of human nature are not only social and cultural but also political constructions (Sahlins, 2008; Graeber and Sahlins, 2017). Yet to succeed it will need to build projects that are truly interdisciplinary, which consider data structures and policies as well as popular media discourses.