
A04: Misogynistic Algorithms
Convenor:
Kathleen Richardson (De Montfort University)
Format:
Plenary
Start time:
9 June, 2022
Time zone: Europe/London
Session slots:
1

Short Abstract:

In her pioneering ethnographic work among AI engineers and developers in the late 1980s and early ’90s, Diana E. Forsythe explored – in terms that might today be considered moderate – the hidden cultural assumptions and gender relations in the field. “Both female bodies and experience,” she wrote, “tend to be treated as inherently anomalous in the male world of high technology”. This anomalous status renders women’s bodies “hyper-visible” and their experience invisible to their male colleagues and, by extension, to the emerging AI systems.

Long Abstract:

Although there has been a great deal of discussion of the biases built into algorithms and other AI programmes in the intervening decades, recent headlines suggest that little has changed. ‘Is AI sexist?’ asked a Foreign Policy article in 2017; ‘AI can be sexist and racist – it’s time to make it fair’, said a 2018 paper in Nature; and ‘Only 22 percent of professionals in AI and data science fields are women’, pointed out a 2021 paper in Stanford Social Innovation Review entitled ‘When good algorithms go sexist’.

It would be comforting to argue that this shows an increase in awareness, and that action is being, or at least will be, taken to build ‘gender-smart’ AI. But it is just as likely that things are getting worse. AI has advanced much faster than efforts to tackle its racist and sexist biases. It is now present in a vast range of everyday interactions, from setting credit scores and running traffic systems to perhaps less well-advertised applications such as sex robots and lethal autonomous weapons systems. Moreover, despite the oft-stated intentions of tech companies, high-profile controversies such as the ousting of Timnit Gebru from her role at Google are cause for serious concern.

In this workshop, participants will be asked to confront the possibility that the “inherently anomalous” treatment of women in AI is indicative of misogynistic structures in its development and use. Steps to address the problem, such as feminist data practices, advocacy for gender-equitable AI and efforts to encourage more women into AI careers, may offer hope – but are they enough? And if not, what are the likely consequences?

Accepted papers:

Session 1