Accepted Paper
Paper short abstract
This paper examines the teaching methodology of Genre Diffusion, an image-making course that uses genres from popular media as a framework for understanding the internal mechanics of AI diffusion models.
Paper long abstract
Genre Diffusion invites students to learn how AI diffusion tools function and how they can be incorporated into an existing workflow. That workflow can involve collaging, hand-drawing or digital tools like Photoshop, and crucially draws on traditional methods of composition-making.
Genre is used as an analogy for the inner workings of AI because, when a model is trained, it goes through a process of categorisation similar to the one humans have historically applied through genres. The course teaches students to use AI as an analytical tool: to look closely at the artefacts the AI produces when left unprompted, and to consider how these reflect popular tendencies or biases. Genre later becomes the backbone of each student's composition, guiding decisions on visual style, narrative, layout and so on.
The final outcome of the course is a high-resolution A2 composition assembled from small AI images that students have edited, or 'hacked', in Photoshop.
Genre Diffusion works within the limitations of the diffusion models currently available for free, which operate most reliably at 1024x1024 pixels, the maximum size used in training. When working in high-resolution large format, therefore, the act of arranging those images on the blank page becomes an exercise in design agency and visual storytelling.
The paper will further expand on the genre-latent space analogy and on the methodology of dataset analysis and image-making, emphasising the importance of introducing slow, critical practices into rapid AI image production.
The digital pantheon: Engineering deities and demons
Session 2