Accepted Paper:
An investigation of Bias and Prejudice Embedded in Auto-writing AI
Yuwei Lin
(University of Roehampton)
Paper short abstract:
Algorithms can now automatically generate data-driven narratives, but these so-called Natural Language Generation (NLG) tools are not neutral. Based on autoethnography, this paper will discuss socio-technical issues linked with GPT-3 when the tool is used to generate politically sensitive narratives.
Paper long abstract:
Algorithms can now automatically generate data-driven narratives, but these so-called Natural Language Generation (NLG) tools are not neutral. Based on autoethnography, this paper will discuss socio-technical issues linked with GPT-3 when the tool is used to generate politically sensitive narratives. Given any text prompt, such as a phrase or a sentence, GPT-3 returns a text completion in natural language. Recently, this tool (and its earlier version, GPT-2) has been used to write news articles and fiction. However, we know very little about the tool's capacity (what types of texts it will return) and why. This paper uses autoethnography to investigate what biases, prejudices, and norms are embedded in the GPT-3 tool, and how, by showing what the tool deems 'politically sensitive' or 'harmful' content. This method renders the black-boxed AI algorithms more transparent, and sheds light on how using NLG AI to write (also known as 'auto-writing' or 'robo-writing') may shape the types of narratives generated.
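
For readers unfamiliar with the interface, the prompt-to-completion interaction described above can be illustrated with a minimal sketch. This assumes the legacy openai Python library (pre-1.0) and the base davinci engine that hosted GPT-3 at the time; the prompt text and sampling parameters are illustrative placeholders, not those used in the study.

    import openai  # legacy openai library, pre-1.0 interface

    openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

    # Send a short text prompt; GPT-3 returns a natural-language completion.
    response = openai.Completion.create(
        engine="davinci",  # base GPT-3 model available via the API at the time
        prompt="The protest in the city square began when",  # illustrative prompt
        max_tokens=64,     # length of the generated continuation
        temperature=0.7,   # sampling randomness
    )

    print(response.choices[0].text)  # the generated continuation

Repeating such calls with varied prompts, and noting which ones the service flags or refuses to complete, is one way the opaque boundaries of 'sensitive' content can be probed from the outside.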