Accepted Paper
Paper short abstract
This contribution discusses the development of participatory methodologies inspired by procedures of "red teaming" that aim to identify, examine, and rethink how “the environment” and its multiple crises are configured within generative AI (GenAI) systems.
Paper long abstract
This contribution discusses ongoing research aimed at identifying, examining, and rethinking how “the environment” is configured within generative AI (GenAI) systems. While the proliferation of commercial GenAI applications and the hyperscaling of data centers have brought the environmental harms of these technologies into sharper focus, most notably their unsustainable resource use and energy demands as well as problems with greenwashing, the direct environmental impacts of the material infrastructure are not the only way in which the environment and GenAI interrelate. Although indirect environmental impacts and underlying values and assumptions are harder to identify, they remain highly influential and should be made visible. Drawing inspiration from the method of ‘red teaming’ AI, we propose ‘green teaming’ as a distinct approach and an initial step towards mapping the diverse ways in which ‘the environment’ is constituted in GenAI, including how it is overlooked.
Red teaming is typically conducted in technology companies to identify unintended, unsafe, and harmful outcomes of AI models. Recently, civil society and public sector organisations have begun to adopt red teaming in 'the public interest' or for 'social good'. In the context of GenAI, this often means creating prompts to evaluate whether outputs (images, text, or another modality) are suitable for a pre-defined use case, and whether they adhere to social norms or (unintentionally) reinforce harmful stereotypes. Usually, this involves collaborative exercises directed by different forms of expertise. This contribution outlines "green teaming" as a participatory methodology and presents first insights from its application.
A field in formation: What do we mean by ‘critical’ and ‘AI’ in Critical AI Studies?
Session 2