- Convenors:
- Noopur Raval (UCLA)
Shazeda Ahmed (University of California, Los Angeles)
Maya Indira Ganesh (University of Cambridge)
Mashinka Firunts Hakopian (ArtCenter College of Design)
Xiaowei Wang (UCLA)
Rida Qadri
- Discussant:
- Ranjit Singh (Data & Society Research Institute)
- Format:
- Combined Format Open Panel
- Location:
- Theater 2, NU building
- Sessions:
- Tuesday 16 July, -, -, -
Time zone: Europe/Amsterdam
Short Abstract:
This combined format open panel curates a set of research presentations that discuss the various global socio-technical imaginaries of AI. The aim is to show how AI systems are being interpreted as social and political intermediaries and what futures they are enabling in turn.
Long Abstract:
In their introduction to the special issue titled “Future Imaginaries in the making and governing of Digital Technology…”, Mager and Katzenbach argue that multiple contested visions and imaginaries of technological futures are always in the process of being articulated, “collectively held and institutionally stabilized.” The field of STS is rich with discussions of ‘sociotechnical imaginaries’ (Jasanoff and Kim, 2009, 2015) as an analytic that helps us attend to how techno-scientific development is constantly enrolled in and intertwined with global, national, governmental as well as communal visions of the future. For ‘Artificial Intelligence’, too, visions of the future - imaginations of collective life after AI, policies for jobs, healthcare, education, etc. - are being performed through talk, metaphors, documents, advertisements, public and private deliberations and more. In that sense, understanding how technical objects and systems eventually get stabilized as social artefacts requires unpacking the material, discursive and speculative work of future-making ongoing around them. Importantly, socio-technical imaginaries become key drivers of social, political and economic responses at macro and micro levels as they fuel utopic and dystopic visions of social life and serve as bellwethers of new opportunities or threats to collective and individual projects of crafting the good life.
This combined format open panel brings together a selection of scholars addressing the global socio-technical imaginaries of AI in formation. Scholars in this session will talk about the work of stabilizing AI objects as social, economic and political artefacts in various contexts spanning corporate innovation, arts practice, national security and rights discourses, among others. The combined format panel will open with a ‘grounding’ introduction by panel facilitators, flow into dyadic conversations between panelists, and end with an open conversation. Accepted panelists will be asked to submit a one-page position paper in preparation and are encouraged to bring multimedia artefacts to their presentations.
Accepted contributions:
Session 1: Tuesday 16 July, 2024
Short abstract:
This contribution investigates the neoliberal aesthetics of mimicry and neutralisation perpetuated by “digital human” and “character creator” apps (Uneeq, Deepbrain AI Human), avatar generators and virtual presenter apps, with a specific focus on questions of affective labour, the attention economy and AI figuration.
Long abstract:
Human interaction with algorithms primarily takes place in the realm of aesthetics and affect, where they are inevitably figured and imagined in different ways, from elements of the interface to anthropomorphic entities that we perceive as operating autonomously: chatbots, AI companions and other Alexas, Replikas and Tays. Their active participation in culture and the economy through the appearance of agency is indisputable: virtual influencers increase profits and other AI characters become lightning rods for corporate responsibility (such as Tay (Anikina 2020)), promising a future increase in diverse types of parasocial (Elvery 2022) relationships with AI-powered figurations.
Building on my work on ‘procedural animism’ (Anikina 2022) exploring the significance of AI figures through the lens of decolonial and feminist STS, in this paper I want to pursue the question of how “digital human” and “character creator” apps, AI avatar generators and virtual presenter apps perpetuate neoliberal aesthetics of mimicry and neutralisation. I aim to question a particular kind of hollowing out of representation and its capture by affective infrastructures of corporate websites, professional social media networks and advertising avenues. I will consider a series of case studies of aesthetic homogenization or neutralisation (favoring “universally” appealing characteristics over nuanced representation) and mimicry (conforming to the ideas of “professional” or “trustworthy”) in the larger sociocultural contexts of generative AI.
The proposal is connected to a body of artistic audiovisual work; a related project is ‘The Chronicles of Xenosocialist AI’: https://www.are.na/medialab-matadero/pro-03-the-chronicles-of-xenosocialist-ai.
Short abstract:
GenAI proposes a new digital information system of concealment, reversing trends of revelation that were normalised with the Googlization of the world. Information concealment is often read as an act of political emergency, but I re-read it as an act of queer authoring from an 'Asian' context.
Long abstract:
The establishment of search as the de facto logic of organising digital information networks foregrounded 'revealing' as the most prominent and singular focus of digital authorship. To be a digital author was to reveal - from outing narratives to whistleblowing. The radical emphasis on transparency and revelation resulted in any form of information withholding - censorship, redaction, information shaping, blackouts, or shutdowns - being seen as an act of political emergency. This paper argues that GenAI is destabilising this stronghold of revelation and replacing it with concealment as a new form of digital authorship. This concealment is not just an act of erasure of information but the emergence of new modes of informational expression, sharing, and exchange. I unpack and re-read this concealment - what GenAI does not show or reveal in its informational models - as embedded in three different histories and imaginaries of concealment in the 'Asian' context: Queerness, public authorship, and practices of detection. Drawing from historical imaginaries and practices in India, Hong Kong, and Taiwan, I offer a different way of understanding the emergence of GenAI as a moment to imagine future affordances and safeguards that are still waiting to be imagined.
Short abstract:
This paper explores Queer representation in Chinese-English machine translation through a focused case study of DeepL. We seek to unravel the complexities surrounding the queerness of NLP, questioning the underlying biases and assumptions embedded in machine translation algorithms.
Long abstract:
This paper explores the complex intersection of Queerness and Natural Language Processing (NLP) within the realm of Chinese-English machine translation, employing a focused case study of the popular DeepL translation tool. Our investigation reveals significant differences in Queer identity representations, especially when translating Chinese terms related to homosexuality and Queerness.
Key to our analysis is DeepL's apparent mistranslation of the Chinese word for "homosexual" as "homophobia". Furthermore, the tool consistently performs poorly in accurately rendering the word "Queer" across various contexts. This paper critically examines the implications of such misrepresentations and their potential impact on communication, identity expression and social understanding.
Through an exploration of the DeepL case study, we seek to unravel the complexities surrounding the queerness of NLP, questioning the underlying biases and assumptions embedded in machine translation algorithms. In dissecting this case through the lens of socio-technical imaginaries, we delve into the embedded cultural and societal assumptions that shape AI algorithms. By scrutinizing the inadequacies of current AI models in handling diverse and evolving expressions of sexuality and identity, this paper contributes to the broader discourse on inclusivity, representation, and ethics within the field of Science and Technology Studies (STS).
Short abstract:
The EU AI Act is nearly finalised, but AI is far from off the tables of public debate or policymaking. Drawing on data about desirable approaches to the future governance of AI in the EU, this article investigates the multiple and contested socio-technical imaginaries of EU policy experts.
Long abstract:
Recent years have seen an accelerated proliferation of guidelines for AI policy and governance, from the OECD and UNESCO to varied national approaches. For example, China, India, the United States, the United Kingdom and the European Union have all taken strides in setting national policy frameworks to both curb purported risks and gain ground in “the global AI race”. What is striking about these varied approaches is that they are an amalgamation of policy and discourse (Bareis & Katzenbach, 2021) – a coproduction of imaginaries, narrative construction and governance.
At the same time, the socio-technical imaginaries (STIs) reflected within these approaches are not fully stabilized singular monoliths. This article takes as its starting point the continuing multiple and contested STIs (Mager & Katzenbach, 2021) within EU policy circles around artificial intelligence. The aim is to explore and conceptualize the multiple imaginaries at play in EU policy circles at a time when a key institutional milestone, the EU AI Act, has been achieved but there are still uncertainties and gaps as to what future governance of artificial intelligence within the EU should look like. Combining a theoretical lens of STIs and the sociology of expectations (Borup et al. 2006; Brown and Michael 2003), I draw on a two-round Delphi survey of EU policy experts to interrogate the imaginaries and expectations of AI, highlighting the tensions between European “technological sovereignty”, the desire for AI to be aligned with European democratic values, the democratic governance of technologies and the desire for competitiveness.
Short abstract:
In this project, we propose to submit a one-page position paper and multimedia website prototype that reflects on the lessons learned from attempted solidarities that were forged at the “Resisting Big Tech & Casteist/Racial Capitalism” workshop (Jan 2024).
Long abstract:
Solidarity requires suturing these open wounds and attending to each other’s pain without what Frank Wilderson describes as “the ruse of analogy” (meaning sincere intimacy requires us to differentiate between anti-Blackness and caste violence).
– Logic(s) Magazine Editorial Team
On January 25-26, 2024, the Ida B. Wells Just Data Lab (JDL) and the Criminal Justice & Police Accountability Project (CPA) co-organized a virtual workshop entitled “Resisting Big Tech & Casteist/Racial Capitalism.” With twenty participants and live translation in Hindi and English, our aim was to flesh out the possibility of transnational convergences between brahminical and anti-Black surveillance. We hoped to challenge the predominantly casteless framing of technology in India, in pursuit of a broader vision of anti-caste and anti-racist digital futures.
A primary motivation for this workshop was to push back against the reductive framings of “caste” and “race” as merely interchangeable categories and avoid over-determining solidarities by attending to their respective contexts with care. Despite this goal, we found ourselves reproducing this “ruse of analogy” and inadvertently collapsing disparate geographies to artificially forge solidarities against global AI policing. It became clear that a meaningful, contextually-grounded, and multilingual lexicon for these conversations is yet to be created.
In this project, we propose to submit a one-page position paper and multimedia website prototype that reflects on the lessons learned from these attempted solidarities. The website will include excerpts from the workshop’s participants on the (im)possibilities of global AI solidarity, displayed in both Hindi and English to underscore the difficulties of transnational meaning-making.
Short abstract:
Through a qualitative multimodal analysis of the websites of leading Chinese AI companies, we identify a cohesive sociotechnical imaginary of machine vision, and explain how four distinct visual registers contribute to its articulation.
Long abstract:
Machine vision is one of the main applications of artificial intelligence. In China, the machine vision industry makes up more than a third of the national AI market, and technologies like face recognition, object tracking and automated driving play a central role in surveillance systems and social governance projects relying on the large-scale collection and processing of sensor data. Like other novel articulations of technology and society, machine vision is defined, developed and explained by different actors through the work of imagination. In this article, we draw on the concept of sociotechnical imaginaries to understand how Chinese companies represent machine vision. Through a qualitative multimodal analysis of the corporate websites of leading industry players, we identify a cohesive sociotechnical imaginary of machine vision, and explain how four distinct visual registers contribute to its articulation. These four registers, which we call computational abstraction, human–machine coordination, smooth everyday, and dashboard realism, allow Chinese tech companies to articulate their global ambitions and competitiveness through narrow and opaque representations of machine vision technologies.
Short abstract:
AI safety expertise performs visions of future AGI that centre the planetary, rather than human life. This paper examines how it entails visions of global politics, both remaking and strengthening dominant visions of world order.
Long abstract:
Advocates of AI safety claim that future AI capabilities may pose an existential risk to humanity, and that scientists, governments and technology companies should ally in paving the way to a desirable future with artificial general intelligence (AGI). This paper examines how technology companies, AI researchers, philosophers and philanthropists have contributed to the making of superintelligent AI futures since the early 2010s through a combination of public science, technology scenarios, risk-assessment tools, moral theories, machine learning models and highly abstract mathematical formalisations of AGI. It shows how contestations around, and stabilisations of, superintelligence scenarios contribute to recasting transhumanist, science fiction and computer science imaginaries of machine intelligence explosion and singularity. Specifically, the paper studies how AGI futures contribute to the recasting of world order visions, as evidenced in ideas of a ‘global superintelligent Leviathan’ (Bostrom 2014) and a planetary ‘stack’ (Bratton 2015). Departing from common understandings of AGI as an anthropocentric project, I argue that non-human life (on earth or on other planets), resource depletion, and the long-term habitability of the cosmos in fact constitute central themes in the AGI and superintelligence discourse. Tracing these within ethical debates about existential risk and extinction from the 1960s on, I contrast current visions of AI catastrophe with prospective scenarios of epochal transformation such as nuclear winters and extreme Malthusian conditions. I show how AI safety expertise centres the planetary as the relevant scale of AI making and political intervention requiring unprecedented coordination and integration, and how it contributes to both destabilising and consolidating existing global hierarchies.
Short abstract:
My contribution examines the imaginaries generated to support advances in AI in the cybersecurity domain and the ways these epistemic claims and future visions can omit the differential vulnerabilities that contribute to insecurities in the first place.
Long abstract:
Generative artificial intelligence has been heralded as a ‘democratizing force’ in cybersecurity for how AI-based solutions can detect and respond to known and unknown threats in real-time with minimal human intervention. My contribution considers how automating repetitive tasks like data collection, extraction, and threat search and detection can also automate a normative bias regarding what constitutes risks and threats and how to mitigate them.
Three cases are examined to trace the imaginaries generated about the future to support advances in AI: IBM’s Watson for Cybersecurity, CrowdStrike’s Generative AI Security Analyst Charlotte AI, and Google’s security large-language model Sec-PaLM. I examine the imaginaries to excavate what I call “automating insecurities.” How do the future imaginaries by IBM, CrowdStrike and Google about AI contain and corral collective notions of (in)security? How do these firms stabilize AI inevitability narratives through epistemic claims and future predictions? What can an analysis of the relationship between innovation and containment in the discourses of AI inevitability in cybersecurity reveal about variants of non-technical insecurities? How do these imaginaries bolster the aspirations of these firms to dominate in the “AI arms race”?
Cybersecurity is often presented as a set of universal measures to protect computer systems, networks, and digital infrastructures. However, how cyber insecurities are understood and experienced is not universal. My contribution examines the imaginaries generated to support advances in AI in the cybersecurity domain and the ways these epistemic claims and future visions can omit the differential vulnerabilities that contribute to insecurities in the first place.
Short abstract:
This paper examines the practice of AI testing in Australian communities, connecting contemporary practices of new technology testing (drones, autonomous vehicles) with longer histories of scientific experimentation in the settler colonial nation.
Long abstract:
Australia has long been a site for scientific experimentation. This history spans innovations in carceral methods in convict sites like Port Arthur, medical experiments performed on Aboriginal people in the central deserts (what is known as Australia's Tuskegee experiments), and the testing of nuclear weapons in territories such as the Montebello Islands, Maralinga, and Emu Fields. From colonisation to the present day, the Australian land and its people have been treated as a ‘low risk’ site for the empirical testing of high-risk theories and procedures. Most recently, Big Tech companies have begun to experiment with the country’s potential as a testbed for new features and products. The streaming service Spotify used Australia as a testing site for its then experimental Discover Weekly playlist. The dating app Tinder piloted features like Tinder Social and Super Like in the Australian market before releasing them globally. And Facebook trialled its 2018 upvote/downvote feature first on users based in Australia and New Zealand. In the context of new technologies, such as AI, Australia has been described as a ‘sandbox for innovation’, an ‘ideal testing ground’ for experimentation, the perfect ‘petri dish’ for global businesses to trial new features before opening them to primary markets. Drawing on historical analysis and fieldwork based in testbed communities (drone delivery and autonomous vehicles), this paper asks: how did Australia come to be treated as a global technology testbed? Who becomes the test subjects for these experiments? And what are the impacts of testing on communities and environments?
Short abstract:
How did words become vectors—what historical, technical gestures enabled and accelerated the growing reverence for probabilistic language and spatialized knowledge that we see in LLMs today, and how can this be undermined through interdisciplinary, intersectional, material reconsiderations of LLMs?
Long abstract:
How do language models represent and reproduce difference, as a fundamental aspect of their operations and imaginaries? How might they ‘know’ difference differently? This contribution examines two contrasting socio-technical imaginaries. One does not yet exist, but is prefigured by and latent within the other. The other currently prevails in the explosion of large language models (LLMs), but its roots grow from 20th century aerial weapons development, eugenicist statistics, cryptography, behavioral psychology, phonology, linguistics, and cybernetic research centered in the US, UK, Germany, and Russia.
This imaginary matters because the decisions that produced LLMs have also determined what language models understand about difference. These key reductive moves—reducing similarity to equivalence, reducing proximity to similarity—take place as small technical gestures one might barely call decisions. Through basic but compounded operations, they still determine how LLMs inscribe difference into their outputs and onto bodies across the world, from those using their interfaces to those laboring to moderate their content.
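To make these compounded operations concrete, the following is a minimal illustrative sketch (the vectors and names are hypothetical toys, not drawn from any particular model) of how nearness in an embedding space is conventionally read off as similarity, and the nearest neighbour then treated as an equivalent:

```python
# Minimal sketch of the proximity-as-similarity move: toy "embeddings" stand in
# for learned word vectors; any real model would supply its own.
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-dimensional vectors for three words.
embeddings = {
    "word_a": np.array([0.9, 0.1, 0.0]),
    "word_b": np.array([0.8, 0.2, 0.1]),
    "word_c": np.array([0.0, 0.9, 0.4]),
}

# Nearness in the vector space is read as semantic similarity, and the nearest
# neighbour is then treated as a substitute or equivalent: the chain of
# reductions (proximity -> similarity -> equivalence) described above.
query = embeddings["word_a"]
nearest = max(
    (k for k in embeddings if k != "word_a"),
    key=lambda k: cosine_similarity(query, embeddings[k]),
)
print(nearest)  # "word_b": the closest vector, hence treated as "most similar"
```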
If the language model’s proximity or ‘nearness’ has been the foundation for assumption-making, let us build computational ‘nearbyness’ that resists this reduction of complexity and entanglement, that brings us close without collapsing distinctions. I take up Trinh T. Minh-ha’s ‘speaking nearby’ to prefigure a contrasting socio-technical imaginary: ‘Nearbyness’ replaces knowing-as-classifying or -conquering with understanding through curiosity, commitment, and relation. Practically, this means returning to the material practices of LLMs with new protocols, reclaimed histories, and intersectional methodologies—to move from foundation models built on classificatory logics toward more transformative models that might unravel them.
Short abstract:
As the promises of artificial intelligence attract growing social, political and financial attention, risks and responsibilities are being imagined in ways that serve the interests of a technoscientific elite.
Long abstract:
As the promises of artificial intelligence attract growing social, political and financial attention, risks and responsibilities are being imagined in ways that serve the interests of a technoscientific elite. In the UK and elsewhere, organisations are starting to institutionalise a mode of governance that presumes to know and take care of public concerns. And new research communities are forming around questions of AI ’safety’ and ‘alignment’. These particular (and, in my view, problematic) modes of responsibility are attached to a view of the technology and its teleology that is already overdue for an STS demolition. In my contribution, I will draw on research into public and expert attitudes, conducted during 2024 via surveys and as part of a BBC documentary on AI and existential risk, and reflect on my role as a proponent, analyst and actor in debates about ‘Responsible AI’.