Accepted Paper
Paper short abstract
We propose to investigate how models take shape through what we call “learning work”: the modelling practices through which AI systems are continuously updated to keep pace with shifting sociotechnical environments.
Paper long abstract
Ethnographic research into developers’ modelling practices constitutes an emergent field of investigation. Methodological strategies that foreground this often-hidden labor are necessary to unveil how models take shape. To address this need, we suggest two moments as methodological entry points. First, transfer learning: the routine adaptation of general-purpose pre-trained models to specific requirements. By examining what developers (out of necessity) choose not to modify, alongside the minute technical features they do alter during such model tailoring, an infrastructural map of how things are tied together can be drawn. Second, the detection of shortcut learning: developers’ struggle to discern when a model is diverging from its expected path, and to identify the specific patterns it has prioritized, so that these diversions are not embedded in subsequent upgrades. As developers catch the model’s spurious correlations, the underlying rules of both the model and the working team’s decision-making become visible. These two methodological entry points are instances of what we propose to call “learning work”: the updating of AI systems in the face of ever-changing, present-tense contingencies. Legacy patterns continuously turn stale, so learning practices become essential to keep these systems alive, while also revealing the disciplinary stakes of what is apprehended and what is excluded from the system’s reality.
The matter of method in researching AI: elusiveness, scale, opacity
Session 3