- Convenor: Kirk Besmer (Gonzaga University)
- Chair: Kirk Besmer (Gonzaga University)
- Discussants: Robert Rosenberger (Georgia Institute of Technology), Galit Wellner (Holon Institute of Technology (HIT)), Lars Botin (Aalborg University), Ryan Wittingslow (University of Groningen), Ralf Cox (University of Groningen)
- Format: Closed Panel
- Location: Aurora, main building
- Sessions: Tuesday 16 July, - (time zone: Europe/Amsterdam)
Short Abstract:
Postphenomenology is a methodological approach that investigates how technologies mediate human experiences and practices. Each paper in this panel seeks to extend the conceptual resources of postphenomenology to address novel and emerging technological practices that are important to STS scholars.
Long Abstract:
Postphenomenology is a methodological approach that seeks to understand human-technology relations by analyzing the multiple ways that technologies mediate human experiences and practices. It has been involved with 4S for years, mainly because its analyses, concepts, and methods overlap with and complement core STS theories. Although it is known for descriptively rich ‘case studies’ of technologies in use, postphenomenology also provides resources to conceptualize the ways in which novel technologies are being taken up in emerging practices. Such conceptualizations often involve extending – or transforming – postphenomenology itself in interesting ways. The four papers of this panel use a postphenomenological approach to conceptualize emerging human-technology practices.
AI is transforming many areas of human activity; creative human work is no exception. Ryan Mitchell Wittingslow and Ralf Cox argue that rather than beginning from human-centered notions such as ‘intentionality’ to understand AI art and creativity, we ought to see AIs as collaborators in art creation rather than replacements for artists. A central challenge with taking a risk-based approach to AI regulation is that AI development moves faster than regulatory processes: new risks emerge even before regulations addressing previous risks are settled. Beginning from a postphenomenological perspective, Galit Wellner offers a methodology for mapping AI risks to be addressed by future regulation. Turning his attention to postphenomenology itself, Robert Rosenberger argues that insights from feminist standpoint theory are indispensable to STS generally and to postphenomenology in particular; he demonstrates this with examples from ‘hostile’ urban design. Finally, Lars Botin looks at the integration of the Arts into STEM programs at major US technical universities (Stanford, MIT, and Harvard). This new educational paradigm presents specific challenges to the traditional STEM paradigm of education, and Botin argues that postphenomenology provides a privileged perspective from which to bridge the differences between these two educational paradigms.
Accepted papers:
Session 1 Tuesday 16 July, 2024, -
Paper short abstract:
AI is transforming our lives. What can regulators do to ensure this technology works for the benefit of users and society at large? The EU AI Act addresses specific risks, and thereby risks being confronted by new ones. My aim is to offer a methodology for mapping AI risks to be addressed by regulators.
Paper long abstract:
The rise of AI has led many regulators around the world to realize that they need to intervene and safeguard their citizens from the potential harms of this technology. The “poster child” of these regulatory efforts is the EU’s AI Act, whose first draft was published in April 2021. The Act takes a risk-based approach, listing specific AI applications and ranking them according to their potential harm. Fast forward to June 2023, when the European Parliament officially approved the Act. It turned out that the two-year gap between the draft and the approval necessitated some updates, most notably to address challenges associated with LLM-based systems such as ChatGPT. The changes to the 2021 draft required intensive negotiations, which ended only in December 2023. The risk is that new risks may emerge, and the regulation may become outdated even before it enters into force. Moreover, risks typically tend to surprise us when they materialize (Jasanoff, 2016). The ability to anticipate as many risks as possible is particularly important when dealing with regulation, which involves slow processes whose results should endure for many years. This is the challenge to be addressed by this article. The aim is to offer a methodology for mapping AI risks to be addressed by future regulation. Postphenomenology can help us map the risks by the relations associated with them, combined with new ethics and politics of technology (Verbeek 2011, Rosenberger 2017, Verbeek 2020, Boenink and Kudina 2020).
Paper short abstract:
This paper deals with how the new paradigm of STEAM is challenging and transforming classical STEM research and education. The claim is that postphenomenology can serve as the mediator of this transformation. The focus is on how the Arts, through the lens of postphenomenology, can contribute to the paradigm of STEAM.
Paper long abstract:
Science, technology, engineering, and mathematics (STEM) have received the bulk of attention within education and related areas for decades, because they are regarded as necessary to further and enhance innovation, development, and growth in Western societies. The assumption here is that this cyclopic focus on the natural sciences and engineering has led to ‘one-dimensional’ initiatives and solutions. At major American universities (Stanford, MIT, Harvard), interdisciplinary approaches have emerged in which the Arts are inserted into the equation, resulting in the acronym STEAM – a powerful and strong ‘concept’ pointing at the new forces and potentials that are released.
This paper will discuss the relevance of STEAM and at the same time frame how postphenomenology can bridge the gaps between the components of the paradigm, which is needed to make the paradigm ‘work’.
Postphenomenology has been concerned with science and technology ever since Don Ihde introduced this perspective into the philosophy of technology (1990), which makes postphenomenology an obvious ‘tool’ for bridging and combining different perspectives within the paradigm of STEAM. Don Ihde’s focus on the (historical) development of science and technology is highly inspirational in this regard. This paper will focus on how the Arts, through the lens of postphenomenology, can contribute to the paradigm of STEAM.
Keywords: STEM, STEAM, Postphenomenology, Interdisciplinarity, Arts
Paper short abstract:
We argue that it is plausible that AIs can and should be (re)conceptualised as collaborators in art creation. In this paper we attempt to formalise, in postphenomenological and psychological terms, how these human–technology relations are best understood.
Paper long abstract:
The integration of artificial intelligence (AI) in the creation of artistic works raises profound questions regarding agency, authenticity, artistic quality, and labour, alongside deeper conceptual challenges concerning the definitions of terms such as ‘art’ and ‘artist’. In both scholarly discourse and public debates, there is no consensus on the creative aesthetic capabilities of computer-generated art in general, nor on the specific case of AI-generated artworks.
Critics, sceptical of the artmaking capacities of AI agents, primarily focus their attention upon human qualities like intentionality. The meaningfulness of artwork is traditionally guaranteed by the assumption that an artwork is the consequence of intentional artistic choices made by an artist with a lifeworld akin to ours. With AIs this guarantee is lost: they seem to lack the appropriate intentionality and embodiment necessary for creativity.
Developing insights first articulated in a preliminary manner by Coeckelbergh (2023), we argue for a more optimistic interpretation. Depending on the nature of the AI tool in question (there are non-trivial differences between the capacities of CNNs, GANs, and other image generation tools), it is plausible that AIs can and should—at least under certain circumstances—be reconceptualised as collaborators in art creation. In this paper we attempt to formalise, in postphenomenological and psychological terms, how these human–technology relations are best understood.