This panel takes an empirically grounded perspective on recent attempts to align algorithmic systems with human values. Drawing on work from Valuation Studies, we are interested in how collective or shared values are mobilized and negotiated in relation to these systems, broadly conceived.
Recent work in machine learning under the heading of ‘value alignment’ seeks to align autonomous systems with ‘human values’ (Russell 2016). Some of this happens through the mathematical formalization of values like ‘fairness’, while approaches like Inverse Reinforcement Learning (IRL) seek to extract a reward function from human preferences or behaviors. Although they are discussed and operationalized in drastically different ways, values are central to recent discussions of algorithmic systems.
How do these understandings of values, drawing from cognitive psychology and economics, correspond to anthropological (Graeber 2001) or sociological (Skeggs and Loveday 2012) theories or indeed empirical approaches like valuation studies (Helgesson and Muniesa 2013), which see values not as a driver of action but as an upshot of practices? How can individual-level data or preferences be reconciled with more complex collective, shared values? How can we agree on what values to prioritize or how to implement them in practice?
This panel takes an empirically grounded perspective on values in algorithmic systems, broadly conceived. We will explore how (collective) values are invoked, negotiated and used to settle disputes in this context, and examine attempts to invest algorithmic systems with specific values. We invite contributions including, but not limited to, the following:
- Ethnographic and other studies of attempts to translate values into machine learning systems and Automated Decision Making (ADM) in different domains.
- Investigations of how such machine learning systems are confronted with value-laden practices on the ground (for example, professional practices).
- Empirical analyses of discourses or debates around values in value alignment, AI safety or Fair ML, including divergent interpretations of concepts such as ‘fairness’ and ‘bias.’
- Accounts of disagreements between academic disciplines or professional domains over the meaning of values.
- Critical and reflexive descriptions of interventions (Zuiderent-Jerak 2015) in this space, including attempts to measure or model values computationally.