Accepted Paper
Short abstract
The rise of generative AI has transformed different areas of communication. However, concerns about different forms of social bias in AI systems underscore the importance of their continuous investigation, in particular regarding their representation of violence and war.
Long abstract
The rise of generative forms of artificial intelligence (AI) is transforming different areas of communication. The ease of producing content using text-focused (e.g. ChatGPT) and image-focused AIs (e.g. Midjourney) opens new possibilities for individuals to learn about and represent a broad range of societal phenomena, including historical and recent instances of mass violence. However, concerns about different forms of social bias in AI systems in the context of modern wars underscore the importance of their continuous investigation. To this end, we pose the following research questions, which we address in this paper: Is there evidence of specific forms of social bias in how image-generative AI models depict contemporary wars? What aspects of war representation does this bias primarily affect? And how does it relate to the relations of power influencing war mediatisation? To answer these questions, we conduct AI audits of two popular image-focused AI models, Midjourney and Kandinsky, regarding their representation of the ongoing Russian war in Ukraine.
Witnessing disasters, crises and wars in the age of datafication
Session 2, Wednesday 17 July 2024, -