Accepted Paper

(Re)configuring AI Methods: Ambiguity, Adversarialism, and Contrarian Thinking.  
Layla Baron (Vrije Universiteit Brussel and European Commission, Joint Research Centre) and Maria Eriksson (European Commission, Joint Research Centre, and Lund University)

Paper short abstract

This paper challenges standardized AI evaluations by combining STS/Critical AI Studies methodologies with methods from red teaming and adversarial testing to illustrate how contrarian thinking, hands-on tinkering, and speculative reasoning can (re)configure AI-oriented methodological agendas.

Paper long abstract

In the field of AI research and development, quantitative AI benchmarks constitute a key means through which the capabilities and risks of AI models and systems are evaluated and “known”. Quantitative AI benchmarks come in many shapes and forms, ranging from multiple-choice questionnaires to frameworks for assessing the free-text outputs of AI models, and evaluations more akin to psychological intelligence tests. As of late, however, the types of knowledges gained through AI benchmarking have been heavily questioned, leading scholars to speak of an ongoing “evaluation crisis” in the field of AI. This paper sets out to problematize quantitative AI benchmarks and benchmarking practices, asking how humanities scholarship and qualitative methods can challenge what counts as “true” and “meaningful” insights about generative AI models and systems. In particular, it explores how methodologies developed in STS and critical AI studies – combined with methods borrowed from the fields of red teaming and adversarial testing – can challenge quantitative, standardized, and metrics-driven forms of AI knowledge production. It also reflects on how red teaming and adversarial testing – as first formalized by the U.S. military during the Cold War and later developed in fields such as cybersecurity – can foster inventive methodological interventions that welcome ambiguity and multiplicities of meaning while problematizing epistemological assumptions about AI. In doing so, it illustrates how adversarial tactics, contrarian thinking, hands-on tinkering, and speculative reasoning can provide fruitful ground for (re)configuring AI-oriented methodological agendas.

Traditional Open Panel P043
The matter of method in researching AI: elusiveness, scale, opacity
Session 3