Saturday November 2, 2024 09:00 - 10:30 GMT
Session Chair: Nabila Cruz De Carvalho
 
Presentation 1
 
Mapping and Critiquing the New Vendor Ecosystem in the ‘Trust and Safety’ Industry
Ariadna Matamoros-Fernandez
Digital Media Research Centre, QUT, Australia
 
This paper is part of a larger project that identifies the technical, ethical, and contextual challenges of the EU ‘trust and safety’ vendor ecosystem (‘Trust & Safety as a Service’). Through an analysis of these vendors’ public documentation and interviews with them, the project maps out this emerging industry, critically analyses its understandings of online harms and how to mitigate them, and evaluates its approach to global principles on online safety (including a commitment to human rights frameworks, multi-stakeholder engagement, and openness and transparency). This paper presents the results of Phase I: an examination of the websites of vendors with enough funding to play a key role in the industry.
 
 
Presentation 2
 
The Political Economy of Trust and Safety Vendors: How Regulation, Venture Capital, and AI are Altering the Governance of Platforms
Lucas Wright
Cornell University, United States of America
 
Trust and safety vendors have become a favorite among venture capital firms, and the idea of outsourcing trust and safety work is compelling to many platforms that never wanted to be in the business of making these decisions. This paper examines the political economy of these vendors, or how market forces, an influx of capital, and new platform regulations like the EU's Digital Services Act are shaping the services they offer. Through critical discourse analysis, interviews with founders and employees, and fieldwork at a large annual professional conference, I examine how these forces are altering platform governance. I find that much of the work of these vendors is discursive work to establish and sell various forms of expertise. I also find that a shift to external processes for governance complicates global efforts to impose accountability and transparency on platforms, as it muddies responsibility for decision-making and opens up possibilities for arbitrage. As these companies position themselves as the standard for safety in platforms, generative AI, and video, it is essential to understand how these forces shape the governance they impose on users.
 
 
Presentation 3
 
Trust in alternative governors: Exploring user confidence in companies, states and civil society in platform content moderation
Dennis Redeker
University of Bremen, Germany
 
Social media platforms such as Facebook, X and TikTok are the “new governors” or “custodians” of the Internet (Klonick 2018; Gillespie 2018). How they moderate global speech online affects the communication practices of billions of people; it can make or break social movements and political resistance, and it is a critical risk factor for human rights violations. These platforms are increasingly joined by states, international organizations, civil society, journalists and others in defining and interpreting the limitations of speech online, be it through legislation, guidelines or by helping platforms to distinguish misinformation from legitimate content. In parallel, researchers ponder questions concerning the legitimacy of various approaches to content moderation (Haggart & Keller 2021; Suzor 2019), which must extend to the question of which actors ought to fulfill which function in the moderation of content. A legitimate content moderation constellation (and potentially division of labor) is arguably one that is perceived to be legitimate by the “governed” themselves (for whatever qualities are appraised by them). As of today, however, we have little empirical knowledge about what users actually think about content moderation. The current paper presents novel empirical evidence on how users perceive platform content moderation and how they perceive the content moderation roles of different governors of speech. The quantitative analysis is based on a survey of more than 15,000 Facebook and Instagram users in 33 countries in the Global South and Eastern Europe, which was conducted in six languages in late 2022 and early 2023.
 
 
Presentation 4
 
Putting Normative Values to Work: The Organizational Practices of Trust and Safety Teams
Tomás Guarna
Stanford University, United States of America
 
What are the organizational practices in platform companies that enable content moderation? An ethnographic study of the Trust and Safety field, the emerging professional field of technology professionals who conduct content moderation among other roles, can help us understand the organizational challenges that practitioners in technology companies face. Through 35 in-depth interviews, I find that Trust and Safety professionals employ a series of persuasive strategies to advance their normative goals. When the goals of different Trust and Safety teams within the same company conflict, Trust and Safety teams enroll internal and external stakeholders to make decisions by consensus based on agreed-upon “tradeoffs.” When the goals of Trust and Safety teams conflict with those of teams across the company, Trust and Safety professionals rely on personal relations and conventional social norms. When these decisions escalate, they “make a case” for their goals to be prioritized by relying on the company mission and often on quantified representations of risk to persuade leadership. But frequent and sudden changes in the foundational social norms of the company or teams present challenges for this work. Ultimately, this study shows that the achievement of normative goals in contemporary platforms is structurally conditioned by the incentives, designs and limitations of platform companies.
 
Uni Central