
Accepted Paper:

Alignment of what? Introducing, framing and “solving” the problem through conceptual engineering   
Daniel López-Castro (Instituto de Filosofía, CSIC)

Paper short abstract:

In this talk, I will systematically analyse the discourses that have taken place in the public forums of Effective Altruism, a philosophical and social movement, and its surrounding ecosystem (Weiss-Blatt 2023), as well as their influence on the popularization of the value alignment problem.

Paper long abstract:

Today, largely owing to Stuart Russell (2019), the value alignment problem is mainly interpreted as a technical problem within machine learning. Its more theoretical conception, however, can be traced back to the reflections of Norbert Wiener (1960) and, in its best-known version, to the ideas presented by Nick Bostrom (2014) in his influential work “Superintelligence: Paths, Dangers, Strategies”. This approach to the issue, grounded in the orthogonality thesis and the idea of singularity—and thus superintelligence—has played a key role in shaping the narrative of the existential risks of AI (Center for AI Safety 2023). Presumably, though, its popularization in Silicon Valley circles, the AI research community and the media would not have been possible without the important conceptual activist role played by Effective Altruism. This philosophical and social movement, utilitarian in spirit, has even been accused of playing a key part in the failed dismissal of Sam Altman, CEO of OpenAI (Broughel 2023). In this paper, I will systematically analyse the discourses and arguments that have taken place in the public forums of this community and its surrounding ecosystem (Weiss-Blatt 2023), as well as their influence on the idea of value alignment.

Panel P287
Beyond value alignment: invoking, negotiating and implementing values in algorithmic systems
  Session 1 Tuesday 16 July, 2024