Accepted Paper
Paper short abstract
Ethnography of a Swiss robotics company showing how engineers use LLM tools to decide when robots and code are “good enough.” I trace how these adequacy regimes reshape autonomy, productivity, care, and hierarchies of expertise.
Paper long abstract
This paper draws on ethnographic fieldwork at a Swiss robotics company, conducted as part of my current project on “adequacy regimes” in generative-AI-supported work. Building on my earlier research on “good enoughness” in software engineering (Bialski 2024), I ask how robotics engineers decide when a robot, a line of code, or a workaround is “good enough” to move on.
In this setting, hybrid work is the norm: engineers alternate between hands-on lab work with physical robots and remote coordination through digital platforms (issue trackers, code review tools, chat channels) and AI systems (LLM copilots, simulation tools). Across these sites, they continually negotiate what counts as “good work.” Autonomy no longer means simply exercising individual judgment, but knowing when to defer to automated test suites, safety protocols, or AI-generated suggestions. Productivity is framed in terms of shipping “good enough” fixes under time pressure, while still upholding care for users, colleagues, and the robots that must operate safely in human environments.
By following debugging sessions, code reviews, and safety discussions, I show how platform metrics, AI outputs, and informal peer evaluations together reshape hierarchies of expertise, competence, and deservingness. The paper thus contributes to the panel’s aim by demonstrating how infrastructures of automation and connectivity reconfigure not only how robotics work is organized, but how workers imagine their productive and moral selves in relation to “good enough” machines and emerging adequacy regimes.
Redefining "good work" in the age of platform, AI, and digitally mediated labour.
Session 1