Accepted Paper

Making Sense of Algorithmic Disruption: Practical Reasoning with AI in Digital Advertising
Natalia Chrobak (AGH University of Science and Technology in Krakow)

Paper short abstract

This paper examines how users of AI-driven advertising systems engage in practical reasoning to manage opaque, unstable, and unpredictable algorithmic outputs. Drawing on ethnographic research, it shows how accountability and trust are enacted without transparency in everyday human–AI interaction.

Paper long abstract

AI-driven advertising platforms are widely promoted as autonomous, efficient, and self-optimizing systems. In everyday use, however, these systems often generate unstable, ambiguous, or unexpected outcomes that require continuous human interpretation and intervention. This paper examines how users of AI systems engage in practical reasoning to manage such persistent algorithmic disruption in real-world settings.

Drawing on 27 in-depth interviews, ethnographic immersion, and a netnographic analysis of over 1,000 professional forum discussions (July 2024–July 2025), the study focuses on Pay-Per-Click (PPC) specialists working with AI-based tools such as Smart Bidding and Performance Max. Rather than exercising direct control over algorithmic processes, practitioners develop situated forms of reasoning that allow them to assess, interpret, and respond to fluctuating system outputs. These practices include shared heuristics, collective sensemaking, and affective attunement to campaign instability—often described as “feeling” when something is wrong before anomalies become visible in performance metrics.

I argue that algorithmic disruption is not a temporary condition but a durable feature of AI-mediated work that reorganizes how knowledge, trust, and responsibility are practically accomplished. I introduce the concept of algorithmic responsibility to capture how accountability is enacted through everyday interpretative practices in the absence of transparency or explainability. By foregrounding how AI is reasoned with in practice, the paper contributes to STS debates on human–AI collaboration, the limits of automation, and the situated competencies required to make ostensibly autonomous systems function in everyday life.

Traditional Open Panel P136
Outlasting 'disruption': Empirical perspectives on practical reasoning with AI
  Session 1