
Accepted Contribution:

What AI ethics assumes: a study about moral decisions and responsibility  
Gabriel Abend, Patrick Schenk, Vanessa Müller (University of Lucerne)

Short abstract:

Concepts of decision and choice are at the heart of AI ethics. Drawing on STS, cultural sociology, and moral philosophy, we cast doubt on this assumption. We empirically examine what counts as a decision or choice in the first place—and how it shapes attributions of moral responsibility to AIs.

Long abstract:

These days AI ethics is all the rage. And for good reason. Policy makers, regulators, pundits, philosophers, and laypeople are struggling with two sorts of questions. First, what’s morally right and wrong in this domain? To what extent is AI ethics unique? Second, autonomous AIs and self-learning algorithms challenge traditional views about responsibility. Wouldn’t true autonomy entail being morally and legally responsible?

Both in policy and scholarship, most accounts of AI ethics rely on concepts of decision and choice. A driverless car must decide whether to swerve to the left. What’s the morally appropriate decision? A firm uses AI to hire a new employee. As it turns out, the AI was more likely to choose white than minority candidates. Were its choices discriminatory and thus unethical?

Drawing on STS, cultural sociology, and moral philosophy, we cast doubt on AI ethics’ widespread and largely uncritical reliance on decision and choice. Our project investigates what counts as decision and choice in the first place. We argue that this varies in patterned ways across societies, groups, and individuals. Plus, it shapes who and what is deemed responsible.

To study these issues, we conducted a factorial survey experiment, coupled with an attitude survey. Respondents were a random sample of the Swiss population (n≈1900). They were asked to attribute decision-making capacity, moral responsibility, and trustworthiness to AIs and human agents in three scenarios: job recruitment, cancer diagnosis, and fact-checking in journalism. Crucially, we manipulated morally significant aspects of these situations, including the outcome, transparency, and discriminatory bias.

Combined Format Open Panel P164
STS & ethics: encounters on common ground
  Session 2, Thursday 18 July 2024