- Convenors: Marcus Burkhardt (University of Siegen), Karin Knorr Cetina (University of Chicago)
- Chair: Hendrik Bender (University of Siegen)
- Format: Traditional Open Panel
- Location: Theater 5, NU building
- Sessions: Thursday 18 July, -, -
Time zone: Europe/Amsterdam
Short Abstract:
Recent advances in machine learning have given rise to new forms of agentic technologies that reshape sociocultural dynamics. We invite contributions that explore the transformations of agency on multiple levels, e.g. human-machine interaction, social-synthetic structures, and alternative AI designs.
Long Abstract:
Recent advances in machine and deep learning have led to the proliferation of digital entities that elude established conceptualizations and categorizations of technologies as mere tools, instruments, resources, or mediators. As technologies capable of acting semi-autonomously in changing environments, synthetic agents have found their way into a wide variety of fields of practice, and they are exceedingly diverse in form and appearance: they range from algorithms in high-frequency trading, intelligent personal assistants, customer relationship chatbots, and image- and text-generating AI models to semi-autonomous cars, drones, robots, smart factories, and autonomous laboratories.
The spread of agentic technologies is both an ongoing process and a driver of social and cultural transformations that manifest themselves on different levels. It is also a process that affects the scope and roles of human agency. The panel invites contributions that specifically examine the dynamics and consequences of these transformations. The questions we propose can be addressed through a wide variety of approaches, including case studies as well as quantitative, computational, and historical studies. They include but are not limited to the following levels and dimensions:
- The transformation of agency is an ongoing process and accomplishment. We invite studies that illustrate this accomplishment and the mechanisms manifest in it.
- What distributions of agency emerge between human and synthetic agents, and which models of human-machine interaction and modes of participation appear implemented in them?
- When whole settings take on agentic features (an example is smart factories), what social-synthetic structures and rules emerge in these processes?
- What negotiations, resistances and subversions mark transformations of agency, with what legal, organizational, and personal consequences? What critical engagements can STS offer to this?
- We also invite investigations that engage with the design of alternatives to current modes of AI by large tech companies.
Accepted papers:
Session 1 Thursday 18 July, 2024, -
Short abstract:
Drawing on literature in cybernetics, human-computer interaction and human-autonomy teaming, the paper aims to re-examine historical concepts of human-machine cooperation since the 1960s and discuss them based on their socio-technical assumptions, role expectations, and autonomy concepts.
Long abstract:
Since the 1960s, numerous academic, industrial and military research projects have gone beyond the conceptualisation of computer based systems as tools for extending human capabilities or controlling mechanical operations and have programmatically conceived and designed them as cooperative partners that actively participate in work processes. Computers were envisioned to support human work by taking over certain clerical tasks or even being directly involved in decision-making processes together with human operators.
Although ideas conceived in the 1960s such as “man-computer symbiosis” or “man-computer partnership” (Licklider 1960; 1964) seemed futuristic at the time, they still shape the development of agentic media today and resonate with contemporary research approaches such as human-autonomy teaming (cf. Lyons et al. 2001). In light of recent developments in the fields of generative machine learning, artificial intelligence, and interface design, as well as the proliferation of sensor technologies, many futuristic visions of the 1960s and 70s appear technically feasible today. In the context of these developments, the paper aims to re-examine the historical concepts of human-machine cooperation and evaluate them based on their socio-technical assumptions, role expectations, and autonomy concepts.
Drawing on relevant literature and debates in cybernetics, human-computer interaction, and human-autonomy teaming, the paper traces the media-historical development of agentic media as co-operative other(s) since the 1960s. In doing so, the paper aims to unpack the developments underpinning today’s proliferation of synthetic agents and discusses how seemingly historical concepts can contribute to the understanding of contemporary technological developments.
Short abstract:
We compare the communicative aspects and forms of agency of LLM-based systems for text generation with those of traditional search engines. We argue that with search engines, the communication partners are still human beings, but for generative A.I., the communicative partner is the machine.
Long abstract:
We compare the communicative aspects and forms of agency of LLM-based systems for text generation with those of traditional search engines. Systems such as Google search practically achieve the automation of an extensive and dynamic catalog. Such a system allows users to enter into communication with the authors (who may also be unknown, anonymous, or multiple) of texts that can be accessed by following links provided in response to their queries. The communication partners are still human beings. LLM-based systems such as GPT3.5, instead, use the texts produced by human beings in the training data and in the interaction to autonomously generate content that may never have been written before or read by a human. In this unprecedented form of artificial communication, the communicative partner is directly the machine, and the agency of such systems can be understood as an interactional achievement.
The new, A.I.-powered search engine Perplexity combines the features of both search engine and generative A.I. (more generally described as “retrieval augmented generation”), operating as a partner that enables communication both through others and with others. Through empirical experiments, we compare the performance of Perplexity with that of GPT3.5, focusing on the interaction with the user in response to factual questions, the use and indication of sources, and the handling of hallucinations or lack of knowledge.
Our hypothesis is that the communicative agency of the digital system underlies Perplexity's performance, enabling it to achieve results that can exceed those of systems much larger in size and computational capacity.
Short abstract:
In this paper I attempt to make sense of the manifold ways agency is modulated in novel artificial intelligence architectures and machine learning practices, paying particular attention to the computational paradigms that aim to simulate (aspects of) the world.
Long abstract:
This contribution elaborates on recent attempts to advance the project of artificial (general) intelligence, based on my reading of emerging research in the field from the perspectives of media and cultural studies. I will focus on the process of worlding in machine learning and the blurring of boundaries between the learning subject and its environment for the purpose of constructing agency. I will discuss several technical texts that outline novel cognitive architectures and virtual environments for ML research and that converge in their epistemic rendering of what the “world” is.
My hypothesis is that the project of generalist AI can be understood relationally through the careful comparison of the “world model” notion and what I refer to as “model worlds”. Model worlds are game-like simulations assembled for the purpose of ML research, a notion that expands on Bruder’s (2021) writing on the use of microprocessors as model organisms in neuroscience. The world model is a paradigm in ML that refers to an AI agent’s capacity to represent the world’s dynamics (Ha and Schmidhuber 2018).
The computer science texts figure the “world” either in the sense of algorithmic knowledge domains or in relation to an AI agent’s directive to grasp the world. Whether external or internal to agents, such simulations of the world allow for predictive control and, therefore, the governance of potentialities, including the drive to learn. Through the dialectics of “world” in this research corpus, I aim to demonstrate how synthetic agency is engineered, negotiated, and reconfigured.
Short abstract:
AI systems such as ChatGPT, Gemini and Copilot are widely conceptualized as agents. As we witness the explosive multiplication of such agents, the paper aims to map this dynamically evolving field in order to raise the question of what differences between AI agents make a difference.
Long abstract:
Following the release of ChatGPT in November 2022, generative AI quickly moved to the center of technical imaginaries worldwide. Developments and news in this field are rapidly evolving in a competition among global technology companies to create increasingly powerful language models, integrate them into specific practical contexts, and explore ways to commodify them. Interactions with language models are conceived as dialogs with AI agents, which can be fine-tuned in various ways and operate semi-autonomously through different interfaces in digital environments. AI researchers working on large language models argue that “agenticness” (Shavit et al. 2023) is what’s specific and new about applications such as ChatGPT, Gemini, Copilot etc. With the introduction of GPTs and a dedicated GPT Store by OpenAI, we are currently experiencing an explosive proliferation of AI agents, raising the question of which differences among AI agents make a difference and how these differences manifest in media practices. Against this backdrop, the paper attempts to map the dynamically evolving field of AI agents. Opposing the rhetoric of the radically new, it first recalls the recurring idea of software agents in the history of AI. Secondly, drawing on approaches from digital methods, it explores experimental ways of engaging with the multiplicity of AI agents and their agentic capacities in situ.
Short abstract:
Machine learning applications operate downstream of instruments, and are themselves instruments: technical apparatuses which serve to sense and present. This paper explores ML as concatenations of instrumentation: cascades of sensing, presenting, and then sensing and presenting again.
Long abstract:
Machine learning is itself an instrument - a technical apparatus which can serve to sense and present, and make available to interaction that which could not be engaged before. But machine learning applications themselves operate downstream of instruments, reliant on the long pathways of sensors that generate data, and which in turn configure ML and feed its algorithmic analytics. This paper thus explores ML as concatenations of instrumentation, or recurrent cascades of sensing, presenting, and then sensing and presenting again.
Instruments (and not only algorithms and data) deserve our attention. Instruments differentiate and materialize the world according to their design, though not so much as to be ‘determined’. By definition, an instrument must be capable of surprise, of revealing something other than what was expected or even hoped, though not so much so as to produce incoherence, a finding that cannot be placed and understood.
We explore the matter through the case of energy-generating windmills. These windmills consist of an aggregation of components (e.g. gearboxes, transmissions, turbines), each of which is itself replete with data-generating sensors; in turn, these data are fed into machine learning models (e.g. for predicting maintenance needs or energy output). In sum: sensors layered on sensors, which at each stage are themselves agentic, that is, consequential in how they intermediate phenomena, presentation, and interaction.
Short abstract:
Complex organizations like laboratories play a huge role in transformation theories as tools for human coordination that allowed modernity to take hold. But what happens when labs become autonomous and the problem of coordination disappears? The paper analyzes developments in lab-autonomization.
Long abstract:
The complex organization plays a huge role in transformation theories that explain the transition from a premodern to the modern age; it is, in a sense, the tool for the coordination of human groups that allowed industrialization and modernity to take hold. Max Weber pointed to hierarchy and expert knowledge as two mechanisms that explain the effectiveness of complex organization. Laboratories, both big and small, are organizational units, but neither hierarchy nor individual human intelligence best characterizes labs, as STS has shown—hierarchy is often played down, non-existent or not possible, intelligence appears distributed, and object relations—rather than personal relations—dominate in human labs. This paper takes this assessment as a starting point for investigating a further transformation: that from human to human-free laboratories in advanced autonomization efforts, in which the problem of human coordination empties out and is replaced by the goal of creating a self-driving, artificially intelligent, agentic setting for science.
This paper explores current autonomization efforts with a focus on disciplines in which such efforts are most advanced, such as chemistry and biology labs.
Short abstract:
ML tools automate teaching tasks, challenging teachers' expertise, autonomy, and accountability. This paper explores these (contested) shifts in (professional) agency using ethnographic examples from Swiss secondary schools.
Long abstract:
As in other fields, machine learning (ML) tools promise a technological solution to educational problems. Tasks that are considered central to teaching are increasingly automated by ML tools: intelligent tutoring systems provide feedback and select appropriate exercises for individual students (task selection), automated grading evaluates essays and open-text exercises (assessment), and learning analytics promise to predict student performance based on their past actions tracked on learning platforms (diagnostics). This not only raises questions about the professional identity of teachers but also disrupts the established distribution of agency between humans and machines in education. The proposed paper addresses this (contested) shift in the distribution of agency using empirical examples from an ethnographic project on the use of ML tools by teachers in Swiss secondary schools. Three dimensions of agency are identified that are particularly affected by ML tools: (1) expertise: in the face of (seemingly) objective tools, teachers are expected to justify their pedagogical decisions when they deviate from automated systems; (2) autonomy: consequently, teaching often involves defending the autonomy of the profession against the claims of companies promising better results; (3) accountability: finally, the advent of ML tools in education also obscures questions of accountability by distributing it across a range of actors (teachers, administrators, developers, systems...) - who should one turn to when grades are perceived as unfair or learning outcomes as lacking? Conceptually, the paper draws on STS research on ML, mediated accountability, distributed agency, and the sociology of professions.
Short abstract:
This paper is based on an ethnography of the development and use of a jazz-improvising digital interactive system in an Institute of Technology in the US. It shows that in the course of such development and use, human creative agency and machine creative agency co-constitute in significant ways.
Long abstract:
In different forms of art, a growing number of artists are taking advantage of generative AI technologies to transform their creative practices, delegating different degrees of agentive control and artistic decision-making to those technologies in the hopes of finding inspiration in their output and thereby expanding their own creative horizons. This paper focuses ethnographically on David, a computer scientist who is also a semiprofessional jazz trumpet player, and on his development and use of a jazz-improvising digital interactive system in an Institute of Technology in the US. It explores the programming decisions made by David in his attempt to transform the system’s creative agency in specific ways, and how those decisions resulted in equally consequential transformations in David’s own creative agency as a result of his joint improvisations with this system. In doing so, the paper points to the co-constitution of human creative agency and machine creative agency in the age of generative AI.