Thursday October 31, 2024 09:00 - 10:30 GMT
SU View Room 5
Session Chair: Nicolette Little
 
Presentation 1
 
LLMs and the generation of moderate speech
Emillie de Keulenaar
University of Groningen, The Netherlands
 
For the past year, the use of large language models (LLMs) for content moderation has appeared to solve some of the perennial issues of online speech governance. Developers have promised 'revolutionary' improvements (Weng, Goel and Vallone, 2023), with LLMs considered capable of bypassing some of the semantic and ideological ambiguities of human language that hinder moderation at scale (Wang et al., 2023). For this purpose, LLMs are trained to generate “moderate speech” – that is, not to utter offensive language; to provide neutral, balanced and reliable prompt outputs; and to demonstrate an appreciation for complexity and relativity when asked about controversial topics. But the search for optimal content moderation obscures broader questions about what kind of speech is being generated. How does generative AI speak “moderately”? That is, under what norms, training data and larger institutions does it produce “moderate speech”? By examining the regulatory frameworks of AI labs, comparing responses to moderation prompts across three LLMs and scrutinising their training datasets, this paper seeks to shed light on the norms, techniques and regulatory cultures around the generation of “moderate speech” in LLM chat interfaces.
 
 
Presentation 2
 
ONLINE POSITIVE SOCIAL ACTIONS (OPSA) AS TECHNO-SOCIAL AFFORDANCES: A FRAMEWORK TO ANALYZE DIGITAL SOCIALITY
Roni Danziger(1), Lillian Boxman-Shabtai(2)
1: Lancaster University, UK; 2: The Hebrew University, Israel
 
Sociability is ostensibly the raison d'être of social media. While studies have explored the promotion of relationships online, little attention has been given to the interactional and discursive resources that users employ to that end vis-à-vis platform design features. This paper offers an analytical framework for studies of digital interaction that bridges micro and macro approaches to social media. Conceptualizing online positive social actions (OPSA) as social affordances of social media platforms, the framework includes four evaluative components: (1) positive social actions (e.g., gratitude), (2) platform technological features (e.g., tagging), (3) interpretation (by recipients and overhearing audiences), and (4) social outcome (which might be pro- or antisocial). OPSA allows an analysis of the different social outcomes generated in the intersection between platform design and the communicative actions of users. The paper demonstrates the framework on three examples of successful, mock, and failed OPSA, and discusses avenues for future research utilizing the OPSA framework.
 
 
Presentation 3
 
Auditing the Closed iOS Ecosystem: Is there Potential for Large Language Model App Inspections?
Jennifer Pybus(1), Signe Sophus Lai(2), Stine Lomborg(2), Kristian Sick Svendsen(2)
1: York University; 2: University of Copenhagen, Denmark
 
Scholarly attention is increasingly paid to the dynamic and embedded ways in which third-party tracking has become widespread in the mobile ecosystem. Much of this research has focused almost exclusively on Android applications in Google Play and their respective infrastructures of data capture, even though Apple and its App Store hold a significant market share. To examine Apple’s ecosystem and to overcome the challenge of opening and inspecting large quantities of apps, we have explored the role that a large language model (LLM) can play in enabling this process. We have opted to use ChatGPT-4 because it has been designed to assist humans in both interacting with and understanding code. We therefore ask: can ChatGPT-4 assist scholars interested in Apple’s mobile ecosystem through the auditing of Apple app (IPA) files?
The outcomes of this explorative study include 1) an evaluation of ChatGPT-4 as an assistant for reading code at scale, as well as of its potential for automating processes involved in future app monitoring and regulation; 2) a deep dive into the shortcomings of ChatGPT-4 when it comes to interventions into the mobile ecosystem; and 3) a discussion of the ethical issues involved in harnessing LLMs to advance a research agenda around Apple’s otherwise understudied ecosystem. Our exploration aims to evaluate a methodological intervention that could bring more observability into an otherwise closed ecosystem, one that promises to preserve end-user privacy yet operates without meaningful external oversight.
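As a minimal sketch of how such an inspection could be scripted (the abstract does not specify the authors' tooling; the OpenAI Python client, file path, prompt and model name below are illustrative assumptions, not their method): an IPA file is a zip archive, so its bundled frameworks can be listed and passed to a GPT-4 model for identification of known tracking SDKs.

    # Illustrative sketch only; not the authors' pipeline.
    # Assumes the OpenAI Python client (openai>=1.0) and a hypothetical IPA path.
    import zipfile
    from openai import OpenAI

    IPA_PATH = "example_app.ipa"  # hypothetical file name

    # An .ipa file is a zip archive; the Frameworks/ directory often reveals
    # embedded third-party SDKs.
    with zipfile.ZipFile(IPA_PATH) as ipa:
        frameworks = sorted(
            {name.split("Frameworks/")[1].split("/")[0]
             for name in ipa.namelist()
             if "Frameworks/" in name and name.split("Frameworks/")[1]}
        )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Which of these iOS frameworks are known third-party "
                       "tracking or advertising SDKs?\n" + "\n".join(frameworks),
        }],
    )
    print(response.choices[0].message.content)

Static listing of framework names is only a coarse proxy for data capture; the abstract's broader questions about automating app monitoring would require inspecting the binaries and network behaviour as well.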
 
 
Presentation 4
 
DOES ALGORITHMIC CONTENT MODERATION PROMOTE DEMOCRATIC DISCOURSE? RADICAL DEMOCRATIC CRITIQUE OF TOXIC LANGUAGE AI
Dayei Oh(1), John Downey(2)
1: Helsinki Institute for Social Sciences and Humanities, University of Helsinki, Finland; 2: Centre for Research in Communication and Culture, Loughborough University
 
Algorithmic content moderation is becoming a common practice employed by many social media platforms to regulate ‘toxic’ language and to promote democratic public conversations. This paper provides a normative critique of the politically liberal assumption of civility embedded in algorithmic moderation, illustrated by Google’s Perspective API. From a radical democratic standpoint, the paper normatively and empirically distinguishes between incivility and intolerance, because they have different implications for democratic discourse. The paper recognises the potential political, expressive, and symbolic values of incivility, especially for the socially marginalised. We therefore argue against regulating incivility with AI. There are, however, good reasons to regulate hate speech, but it is incumbent upon the users of AI moderation to show that this can be done reliably. The paper emphasises the importance of detecting diverse forms of hate speech that convey intolerant and exclusionary ideas without using explicitly hateful or extremely emotional wording. The paper then empirically evaluates the performance of current algorithmic moderation to see whether it can distinguish incivility from intolerance and whether it can detect diverse forms of intolerance. Empirical findings reveal that current algorithmic moderation does not promote democratic discourse but rather deters it, by silencing the uncivil yet pro-democratic voices of the marginalised and by failing to detect intolerant messages whose meanings are embedded in nuance and rhetoric. New algorithmic moderation should focus on the reliable and transparent identification of hate speech and align with feminist, anti-racist, and critical theories of democratic discourse.
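As a minimal sketch of how a single comment can be scored with Google's Perspective API discussed above (the requested attributes and placeholder key are illustrative; this is not the authors' evaluation pipeline):

    # Illustrative sketch only; endpoint and attribute names follow the
    # public Perspective API documentation, the API key is a placeholder.
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           f"comments:analyze?key={API_KEY}")

    payload = {
        "comment": {"text": "Example comment to be scored."},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}, "IDENTITY_ATTACK": {}},
    }

    response = requests.post(URL, json=payload, timeout=30)
    response.raise_for_status()
    scores = response.json()["attributeScores"]

    # Each attribute returns a summary probability between 0 and 1.
    for attribute, data in scores.items():
        print(attribute, round(data["summaryScore"]["value"], 3))

The API returns per-attribute probabilities rather than a single verdict, which is precisely where the paper's distinction matters: a high TOXICITY score can flag uncivil but pro-democratic speech, while coded intolerance may score low on every attribute.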
 