Accepted Paper

Re-shaping the work of decision-making: AI decision support and the role of case officers  
Paola Lopez (University of Bremen) Rainer Rehak (Weizenbaum Institute for the Networked Society)

Paper short abstract

We explore the ways in which the working processes of case officers are reshaped by the introduction of AI decision support tools. We lay out several rationales that case officers have to adhere to, and we argue that these competing rationales create a very narrow position for case officers to inhabit.

Paper long abstract

In this paper, we discuss the role of case officers in contexts of AI decision support. In sensitive contexts, e.g., the allocation of welfare resources, fully AI-based decisions are usually prohibited: under the EU's General Data Protection Regulation, a decision based on sensitive personal data cannot be fully automated, and under the new EU AI Act, there must be meaningful human oversight. This creates challenges for those within institutions who are tasked with making the “final decision”, i.e., the “humans in the loop”.

We explore the ways in which the working processes of case officers are reshaped by the introduction of AI decision support tools. We lay out several, often competing, strands of rationale that case officers have to adhere to: there is the efficiency rationale, as data-driven tools are supposed to support case officers with their case loads, along with its accompanying expectation of increased case numbers; there are legal regulations that require meaningful agency in all decisions, as well as the need for individual “AI literacy” in order to even understand how an AI-proposed decision came about; and there are organizational rules governing how deviating from, or adhering to, an AI-proposed decision has to be justified by the case officer. Drawing on case studies, we argue that these competing rationales create a very narrow and often almost impossible position for case officers to inhabit.

Traditional Open Panel P217
‘Nothing comes without its world’: Futuring work with/through/against AI epistemologies
  Session 1