Accepted Paper:
Paper short abstract:
In this paper I argue that instead of dealing with the regulation of synthetically produced content, regulators must regulate the knowledge and trust authority of LLMs and their providers.
Paper long abstract:
Absurd hopes, analogies and expectations have accompanied large language models (LLMs) and their performances since 2021. Tech firms and their leaders have strategically hyped (e.g. see Gates, 2023, or Future of Life Institute, 2023) - and criti-hyped (Vinsel, 2021) - chatbots like ChatGPT or Bard.
Current policy approaches like the AI Act, DSA or the US Executive Order revolve around proposals like labelling, pre-deployment red-teaming, and other security measures. However, in this paper I argue that instead of dealing with the regulation of synthetically produced content, regulators must regulate the knowledge and trust authority of LLMs and their providers. When television shows broadcast entertaining fiction, it is not a problem if viewers understand and treat it as what it is: fiction instead of reality. The same holds for LLMs.
BigTech’s hyping of the LLM phenomenon powerfully illustrates how speaking position and impression management in the public communication arena create followership and influence trust in LLMs’ synthetically created content (Bareis et al., 2023).
Policymakers must tackle the authority and credibility of knowledge production instead of fighting a lost battle of fact-checking and auditing the rapidly increasing synthetic content on the web. I will dive into several policy recommendations, such as classifying high-risk providers instead of high-risk content as proposed in the EU AI Act, lobby control and transparency registers, or narrative codes of conduct in announcing tech innovation for both BigTech and policymakers. In doing so, this paper will demonstrate the power of trust and authority creation and its disregard in the current LLM policy debate.
Towards mapping and defining critical hype studies
Session 2 Wednesday 17 July, 2024, -