Thursday October 31, 2024 09:00 - 10:30 GMT
Octagon Council Chamber
Session Chair: Monika Fratczak
 
Presentation 1
 
GOVERNING FROM BLACK TO WHITE: DISINFORMATION IN NUCLEAR EMERGENCIES
Seungtae Han, Brenden Kuerbis, Amulya Panakam
Georgia Institute of Technology, United States of America
 
Our research delves into the impact of disinformation in emergencies (DiE), specifically within the context of nuclear emergency responses, and the dynamics of the political economy behind it. Key questions guiding our investigation include which assessment techniques are suitable for detecting and evaluating DiE, and which governance responses are effective.
Our research employs established communications theories of propaganda, emphasizing propaganda analysis as a tool for understanding DiE and developing institutional responses. We examine two nuclear emergency cases, the Fukushima Daiichi Nuclear Power Plant (FNPP) disaster and the occupation of the Zaporizhzhia Nuclear Power Plant (ZNPP) in Ukraine, to uncover the disruptive impact of disinformation on emergency communication.
Through the categorization of propaganda into black, gray, and white types, we analyze the tactics employed in DiE, shedding light on strategic intent, transparency, and veracity. The case studies reveal instances of false narratives propagated by governments and media channels, influencing public perception and exacerbating tensions. While our study has not observed significant AI-enabled DiE, we highlight AI's potential use in identifying DiE. State-led counter-disinformation initiatives, however, face challenges, including jurisdictional issues and calls to protect free expression.
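To make the taxonomy concrete, the decision rule below is a toy, hypothetical sketch (the authors' analysis is qualitative and also weighs strategic intent): it maps the two dimensions of source transparency and content veracity onto the three propaganda types.

```python
from enum import Enum

class Propaganda(Enum):
    WHITE = "white"  # source acknowledged, content broadly accurate
    GRAY = "gray"    # source ambiguous or veracity uncertain
    BLACK = "black"  # source concealed or misattributed, content false

def classify(source_disclosed: bool | None, content_true: bool | None) -> Propaganda:
    """Toy rule over two of the dimensions named above: transparency and veracity.
    None encodes 'unknown/ambiguous'."""
    if source_disclosed and content_true:
        return Propaganda.WHITE
    if source_disclosed is False and content_true is False:
        return Propaganda.BLACK
    return Propaganda.GRAY

print(classify(True, True))    # Propaganda.WHITE
print(classify(False, False))  # Propaganda.BLACK
print(classify(None, True))    # Propaganda.GRAY
```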
We posit the necessity of non-state-led networked governance structures, drawing parallels with successful cybersecurity governance models. These frameworks, informed by interdisciplinary insights and operating independently of states, are well positioned to address the multifaceted challenges posed by DiE. Addressing participatory, structural, and operational impediments within existing content moderation governance mechanisms is pivotal to realizing effective strategies.
 
 
Presentation 2
 
HOW GOVERNING TAKES PLACE: THE ORIGINS AND EVOLUTION OF MIS- AND DISINFORMATION POLICY IN AUSTRALIA
Nadia Alana Jude
Queensland University of Technology, Australia
 
Nation-state regulation of digital platforms is on the rise, marking a global shift from platform self-regulation to increased government intervention. In Australia, much attention has been directed towards the problems of misinformation and disinformation. Mis- and disinformation were first considered a "remote threat", but local concerns have been supercharged by issues such as COVID-19, the 2019 Australian bushfires, and the 2023 Indigenous Voice to Parliament referendum. Inquiries into mis- and disinformation's impact on public interest journalism, advertising and news markets, and elections coalesced to produce the Australian Code of Practice on Disinformation and Misinformation.
However, as the Australian Government seeks to strengthen this Code via the draft Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023, scholars worldwide are hotly debating representations of the mis- and disinformation problem. Four primary critiques are that the problem is often (1) overly tied to technology platforms; (2) narrowly focused on individual pieces of bad content; (3) erroneously framed as a new problem of epistemology; and (4) commonly cast as an issue of deviant or deficient individuals rather than a problem of social and political structure.
Mobilising Foucault’s concept of ‘problematisation’ (1983) and Bacchi’s ‘what’s the problem represented to be’ framework (2009), this paper critically examines representations of the misinformation ‘problem’ in Australia since 2016. It asks:
(1) How has the problem been represented by regulators and governments in Australia since 2016?
(2) What kinds of policy solutions have been encouraged, and which alternatives have been closed off?
(3) Whose voices and identities have been privileged, and whose have been ignored?
 
 
Presentation 3
 
GOVERNING AND DEFINING MISINFORMATION: A LONGITUDINAL STUDY OF SOCIAL MEDIA PLATFORM POLICIES
Christian Katzenbach(1), Daria Dergacheva(1), Vasilisa Kuznetsova(1), Adrian Kopps(2)
1: University of Bremen, Germany; 2: Alexander von Humboldt Institute for Internet and Society, Berlin, Germany
 
This study explores how the governance and conceptualization of misinformation by five major social media platforms (YouTube, Facebook, Instagram, X/Twitter, TikTok) have changed from their inception until the end of 2023. Applying a longitudinal mixed-method approach, the paper traces the inclusion of different types of misinformation in the platforms' policies and examines periods of convergence and divergence in their handling of misinformation.
The study identifies an early topical focus on spam and impersonation, with a notable shift towards political misinformation in the 2010s. Additionally, it highlights significant inter-platform differences in addressing misinformation, which illustrates the fluid nature of definitions of misinformation, as well as the influence of external incidents (elections, conflicts, COVID-19) and regulatory, societal, and technological developments on policy changes.
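As a minimal sketch of how such longitudinal coding might work in practice (the keyword codebook, snapshot data, and helper names below are hypothetical illustrations, not the authors' instrument), dated policy snapshots can be coded for the misinformation categories they address and compared across platforms and time:

```python
from collections import Counter
from datetime import date

# Hypothetical keyword codebook; the authors' actual coding scheme
# is not specified in the abstract.
CATEGORIES = {
    "spam": ["spam", "clickbait"],
    "impersonation": ["impersonation", "fake account"],
    "political": ["election", "civic", "voter"],
    "health": ["vaccine", "covid", "medical"],
}

def code_snapshot(text: str) -> Counter:
    """Count keyword hits per misinformation category in one policy snapshot."""
    lowered = text.lower()
    return Counter(
        cat for cat, words in CATEGORIES.items()
        for w in words if w in lowered
    )

# Illustrative snapshots: (platform, date, policy excerpt)
snapshots = [
    ("youtube", date(2008, 1, 1), "We remove spam and clickbait."),
    ("youtube", date(2020, 6, 1), "We remove election and vaccine misinformation."),
    ("tiktok",  date(2020, 6, 1), "Claims about covid or voter fraud are reviewed."),
]

for platform, when, text in snapshots:
    print(platform, when, dict(code_snapshot(text)))
```

Tallying such codes per platform and year would surface the shift from spam and impersonation towards political and health misinformation that the study reports.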
 
 
Presentation 4
 
THE DARK SIDE OF LLM-POWERED CHATBOTS: MISINFORMATION, BIASES, AND CONTENT MODERATION CHALLENGES IN POLITICAL INFORMATION RETRIEVAL
Joanne Kuai(1), Cornelia Brantner(1), Michael Karlsson(1), Elizabeth Van Couvering(1), Salvatore Romano(2)
1: Karlstad University, Sweden; 2: Universitat Oberta de Catalunya, Spain
 
This study investigates the impact of Large Language Model (LLM)-based chatbots, specifically in the context of political information retrieval, using the 2024 Taiwan presidential election as a case study. With the rapid integration of LLMs into search engines like Google and Microsoft Bing, concerns about information quality, algorithmic gatekeeping, biases, and content moderation have emerged. This research aims to (1) assess the alignment of AI chatbot responses with factual political information, (2) examine the adherence of chatbots to algorithmic norms and impartiality ideals, (3) investigate the factuality and transparency of chatbot-sourced synopses, and (4) explore the universality of chatbot gatekeeping across different languages within the same geopolitical context.
Adopting a case study methodology and a prompting method, the study analyzes responses from Microsoft's LLM-powered search engine chatbot, Copilot, in five languages (English, Traditional Chinese, Simplified Chinese, German, and Swedish). The findings reveal significant discrepancies in content accuracy, source citation, and response behavior across languages. Notably, Copilot demonstrated a higher rate of factual errors in Traditional Chinese while performing better in Simplified Chinese. The study also highlights problematic referencing behaviors and a tendency to prioritize certain types of sources, such as Wikipedia, over legitimate news outlets.
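As a minimal illustration of the scoring stage (the response data, fact labels, and source categorization below are hypothetical, not the authors' instrument; collecting Copilot responses is assumed to happen upstream via manual prompting in each language), per-language error rates and citation patterns could be tabulated like this:

```python
from urllib.parse import urlparse

# Hypothetical collected data: language -> list of (claim_correct, cited_urls).
responses = {
    "en":    [(True,  ["https://en.wikipedia.org/wiki/Example"]),
              (True,  ["https://www.reuters.com/world/"])],
    "zh-TW": [(False, []),
              (True,  ["https://zh.wikipedia.org/wiki/Example"])],
    "zh-CN": [(True,  ["https://news.example.com/article"])],
}

def audit(lang: str, items: list) -> dict:
    """Per-language factual error rate and share of Wikipedia citations."""
    errors = sum(1 for correct, _ in items if not correct)
    urls = [u for _, cited in items for u in cited]
    wiki = sum(
        1 for u in urls
        if (host := urlparse(u).hostname) and host.endswith("wikipedia.org")
    )
    return {
        "lang": lang,
        "error_rate": errors / len(items),
        "wiki_share": wiki / len(urls) if urls else 0.0,
    }

for lang, items in responses.items():
    print(audit(lang, items))
```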
These results underscore the need for enhanced transparency, thoughtful design, and vigilant content moderation in AI technologies, especially during politically sensitive events. Addressing these issues is crucial for ensuring high-quality information delivery and maintaining algorithmic accountability in the evolving landscape of AI-driven communication platforms.
 