
Accepted Paper:

Learning in the wild: on the problem of adaptivity in machine learning  
Nikolaus Pöchhacker (Technical University of Munich), Marcus Burkhardt (University of Siegen)

Paper short abstract:

The promise of machine learning applications is their ability to adapt to unforeseen futures without being explicitly programmed. This proclaimed adaptivity, however, does not come about automatically. We therefore ask how adaptivity is accomplished in machine learning, on different levels and to varying extents.

Paper long abstract:

In June 2017 Sundar Pichai, CEO of Google, proposed a paradigm shift in the history of computing: innovation should no longer be driven by approaching problems as first and foremost digital or mobile, but instead by taking an AI-first approach fueled by recent advances in the field of machine learning. This statement reflects a central promise of machine learning applications, namely the ability to adapt to unforeseen futures without being explicitly programmed: visual recognition of objects or persons the system has never seen or been trained on before, self-driving cars dealing safely with new situations, or chatbots conducting engaging conversations with humans.

Conversely, the more such technologies are built into the fabric of everyday life, the more concerns are raised about their potential risks, e.g. biases and inequalities inherent in training data sets. As a result, ML models often produce (social) structures instead of adapting to them. This tension between the promises of ML and its perceived risks points toward a hitherto largely unstudied aspect of data-driven applications: the production of adaptivity in real-world ML applications. Drawing on examples such as Microsoft's chatbot Tay.ai, recommender systems, and fraud detection applications, the paper aims to unpack the notions of adaptivity that ML rests upon. By focusing on how adaptivity is accomplished on different levels and to varying extents, our goal is to explore the ontological politics that ML systems enact in the wild of their real-world deployment.

Panel A27
The power of correlation and the promises of auto-management. On the epistemological and societal dimension of data-based algorithms
Session 1