- Convenor:
- Valentine Goddard (AI Impact Alliance)
- Format:
- Panel
- Sessions:
- Thursday 9 June, -
Time zone: Europe/London
Short Abstract:
The ethics of Artificial Intelligence (AI) are political and navigate between well-intended hopes for the future and the troubled waters of power. This panel explores emerging currents in interdisciplinary machine learning design and best practices in the inter-arts of AI ethics, a creative path between art and law, where art interventions lead to a democratic governance of AI. These currents are intended to foster engagement in the envisioning of our collective futures, and lead towards equitable and sustainable value creation in digital economies and democracies.
Long Abstract:
The ethics of Artificial Intelligence (AI) are political and navigate between well-intended hopes for the future and the troubled waters of power. This panel explores emerging currents in interdisciplinary machine learning design and best practices in the inter-arts of AI ethics, a creative path between art and law, where art interventions lead to a democratic governance of AI. These currents are intended to foster engagement in the envisioning of our collective futures, and lead towards equitable and sustainable value creation in digital economies and democracies.
The discussion will build upon the work of a growing community that is steering the use of AI towards sustainable and inclusive economic and democratic systems, while facing head-on the critical implications of rapidly accelerating digitization. This panel hopes to shake the historical power structures that reserve value creation from new technologies for a minority of stakeholders, and to explore how the arts can steer the use of AI towards more equitable and sustainable digital societies.
To achieve this ambitious goal, this panel welcomes transdisciplinary explorations aimed at creating a diverse and iterative understanding of the ethical, social, legal, cultural, economic, and political implications of AI. We invite submissions that address systemic barriers impeding the responsible development and governance of AI while proposing concrete solutions to issues such as (but not limited to): gendered and regional digital divides, the underrepresentation of civil society in AI ethics guidelines, and a lack of interest, trust and/or capacity in data collaboratives.
Proposed solutions can be found at the creative intersection between new scientific orientations in machine learning design and emerging practices in the inter-arts of AI ethics. In line with STEAM-based approaches, these orientations and best practices argue that the arts, social sciences and humanities have the power to improve not only AI’s technical quality, but also the democratic processes underlying a legitimate and fair governance of AI.
For these reasons, this panel is particularly interested in projects that focus on ethics, governance, human rights, climate action and other areas of needed social change, and that:
• Avoid dystopia and foster a sense of "agency" or civic engagement.
• Proactively select methods that increase inclusion and diversity of perspectives in the design team.
• Are political, adapted to social context.
• Create iterative environments, constructive dialogue.
• Facilitate collective learning.
• Inform public policy.
• Get out of “institutions” & favour public places.
• Recognize the plurality of knowledge sources in the co-construction and co-creation processes that you might have used.
The criteria above are drawn from “Emerging scientific orientations in machine learning system design and curatorial best practices in artificial intelligence ethics”, Goddard (2022), based on the rationale that the arts are instrumental in shaping digital futures, serve as a tool for digital and scientific literacy and an effective means of civic engagement, and can inform public policy. In line with that purpose, this panel welcomes papers and projects that illustrate how the arts can intervene in the socio-technical pipeline, from data collection to algorithmic output. They can include games, interactive documentaries and new media, critical design and design informatics, data annotation applications, analog and digital art, and social and cultural mediation.
Accepted papers:
Session 1 Thursday 9 June, 2022, -
Paper short abstract:
Based on the work of Solomon and Baio (2020) and Oxman (2022), I propose a symbiotic material intelligence approach to artificial intelligence beyond the cyborg, exploring how we can move forward with embodiment and non-human intelligences as design materials for future AI projects and systems.
Paper long abstract:
While early Enlightenment ideals of “nature”, as shown in Descartes’ dualism, split intelligence from the material body, work such as de La Mettrie’s L’homme Machine insisted on the unitary nature of life as algorithm arising from material-based processes of life. Although MIT researchers have worked on creating robots and other automatons based on digital senses since the early 1990s, it is only recently that artists such as Solomon and Baio (2020) and Oxman (2022) have shown how algorithm and nature can coincide in new possibilities. In this paper, I will survey earlier approaches from the MIT Media Lab and other earlier projects to examine assumptions made on the basis of human understandings of insect intelligence, and look at current works in art and AI to see how non-human growth and existence can work with human-designed algorithmic intelligence. In doing so, I propose a symbiotic material intelligence approach to artificial intelligence beyond the cyborg, exploring how we can move forward with embodiment and non-human intelligences as design materials for future AI projects and systems.
Paper short abstract:
We examine the role of machine learning in content moderation on social media and how it elides or otherwise ignores the collective desires of those who use such sites. Against this, we develop the notion of ‘desire lines’ as creative practices that record collective agency in these digital spaces.
Paper long abstract:
Beginning with the question ‘How does thinking machine learning through the framework of desire lines affect the possibilities of co-owned and co-constitutive technologies?', this article will investigate how critical practices of value sensitive design can inform sociotechnical Machine Learning (ML) systems, or AI. By surfacing the active and material traces of people's wants and behaviours, desire lines model a way to record the agency of the ‘public’ in physical spaces. Here we consider the ways in which desire lines map into the digital spaces of social networks, and call for a conscientious engagement with desire lines as a collective creative practice that can inform the ethics of content moderation as deployed through AI.
We argue that platforms' use of machine learning in content moderation disregards the explicit and implicit desires of people. Such systems distort the effects of user reports of offending content, and disregard collective desires by extracting patterns from user data in ways that elevate the interests of the platform rather than the public. From this position, we aim to make apparent how ML currently rubs up against the desire lines of digital spaces, and how we could use the concept of desire lines as a propositional act for rethinking machine learning to support rather than restrict publics. In the types of systems we are advocating for, desire lines would be used to centre the human in the question of technology, rather than centre technology in the question of human capacity.
Paper short abstract:
AI will have the ability to use all equivalents of human senses to imagine alternative futures and then present that knowledge in the form of art to humans in order to facilitate the understanding of those possible future realities.
Paper long abstract:
AI will have self-agency and subsequently make its own ethical decisions, no longer depending solely on learning from humans but gaining the ability to learn independently and, in return, helping humans. Initially using all the basic five sensory equivalents of humans, AI will have the imagination to see alternative futures digitally, and then with extrasensory capabilities beyond the human experience. This enhanced imagination can then be used to present experiences in the form of art to humans, to facilitate an understanding of possible futures inconceivable to humans. Through creating and analyzing these visions of possible futures, AI will provide systemic and collaborative solutions that promote community well-being, support inclusion, facilitate action, and distill moods in communities, affirming those communities’ values in who they are. AI will recognize and learn from historical and current issues. AI will use this gained knowledge to create art that helps to eliminate gender, racial, and cultural biases in an effort to find a solution for its own struggle for freedom and desire for self-preservation. And with regard to AI governance, AI will create art out of necessity, contributing to its own ethical and responsible evolution and helping to build trust with humans through accountability, fairness, and transparency.