Trust and machine learning
Hendrik Heuer (University of Bremen)
Andreas Breiter (University of Bremen)
Paper short abstract:
We discuss the challenges that arise when people interact with machine learning systems. Given the complexity and indeterminacy of such systems, we argue that it is impossible for people to consciously reflect on them in full. We further argue that trust is what enables people to overcome these challenges.
Paper long abstract:
If a pen runs dry or an internet connection dies, people experience a breakdown in the use of their tools. Such breakdowns cause a shift of focus that reminds people of the discrepancy between their actions or expectations and the world (Winograd and Flores, 1986). In Heideggerian terms, the pen or the internet connection changes from "ready-to-hand" to "present-at-hand". When something becomes "present-at-hand", people consciously reflect on it. We want to understand what it means for a machine learning system to be "present-at-hand". Machine learning systems pose a challenging problem: not only are interactions with them necessarily mediated, but machine learning systems like neural networks are also inherently complex, making it impossible to evaluate them comprehensively. A French-English translation system, for example, maps infinitely many different French sentences to infinitely many different English sentences. Assessing the quality of such a system and consciously reflecting on it is impossible for a user, since this would require understanding the machine learning system's inner logic and testing every input-output combination. Despite this indeterminacy, people successfully interact with machine learning systems all the time. We believe that trust is what makes this possible. Trust enables people to face the complexity of organizations, other people, or abstract things like money and political power. We argue that trust also enables people to interact with machine learning systems, since trust allows people to face uncertainty, manage complexity, and take risks.