Friday November 1, 2024 15:30 - 17:00 GMT
Session Chair: Bernhard Rieder
 
Presentation 1
 
CheatGPT? The Realities of Automated Authorship in the UK PR and Communications Industries
Tanya Kant
University of Sussex, United Kingdom
 
Drawing on interview and survey data from content writers in the UK PR and communications industries, this paper will critically and empirically explore content creators' engagements with generative text AI in relation to creative authorship and expertise. The project will utilize a critical framework of algorithmic literacy to empower thus-far overlooked stakeholders in AI tool use in this creative industry sector.
The paper will present findings from a survey of 1,000 PR and communications content writers and their managers/employers, and from 30-50 follow-up interviews with the same stakeholders. The survey is due to launch in April 2024, with findings ready to report after September 2024. This paper will disseminate the findings from the full project, exploring the realities of automated authorship and the opportunities and limitations that algorithmic literacy might bring in enhancing smaller stakeholders' algorithmic empowerment and expertise. Pilot data suggests that a) generative text AI is increasingly being used by content writers in ways that challenge speculative forecasts of generative text use, and b) these tools are useful for saving time, generating ideas and synthesising existing text, but cannot (yet) be used to replicate or generate an authorially convincing tone of voice or brand identity. Such indicative findings suggest that algorithmic literacy could be used to create dialogues in workplaces that foreground the problems of automated authorship, especially in terms of human expertise and professional creative subjectivity.
 
 
Presentation 2
 
Global press coverage of human and post-human abilities in ChatGPT applications
Aya Yadlin(1), Avi Marciano(2)
1: Bar-Ilan University; 2: Ben-Gurion University of the Negev
 
In November 2022, the tech company OpenAI launched a ground-breaking chatbot model, ChatGPT. This unprecedented chatbot, characterized by its ease of use for lay internet users, gained immediate popularity and attracted extensive media attention. This article examines global press coverage of ChatGPT on peak reporting dates during the first full year of its existence. Based on a qualitative holistic narrative analysis, our findings point to two narrated scapes of political fear in the coverage of ChatGPT: the fear of the machine and the fear of the human. These attest to the collective imagining of an intensified future, where post-humanist interaction with political information is associated with exploitation, propaganda, and the polarization of existing political rifts. We draw on the case study to articulate journalists' role in signaling instability in the current political media ecosystem, and their construction of a techno-moral framework for society. We discuss an important blind spot in journalists' fulfilment of their normative role in fostering technology-informed citizens globally.
 
 
Presentation 3
 
GPT and the Platformization of the Word: The Case of Sudowrite
Daniel Whelan-Shamy
Queensland University of Technology, Australia
 
In this extended abstract, I argue that OpenAI's Generative Pre-trained Transformer (GPT) Large Language Models (LLMs) are increasingly being positioned as platforms through the extension of various GPT models into different platforms and applications. My interest is directed at applications that are increasingly playing a part in processes of writing and authorship within the creative and cultural industries, and is particularly focused on the role of LLMs in the creation of texts and how they are potentially shaping this process. To empirically ground my research and its arguments, I apply the walkthrough method to the writing application Sudowrite. This work has a dedicated interest in responding and contributing to growing scholarly conversations around Artificial Intelligence technologies and various forms of power. An overarching hypothesis of this work is that enrolling a profit-driven platform company such as OpenAI in the creative process begets a position of power: if GPT becomes basic infrastructure for writing and authorship, it may embed and naturalise certain ways of doing and organising written communication, creativity, and expression in the interests of corporate power.
 
 
Presentation 4
 
Assessing Occupations Through Artificial Intelligence: A Comparison of Humans and GPT-4
Paweł Gmyrek(1), Christoph Lutz(2), Gemma Newlands(3)
1: International Labour Organization, Switzerland; 2: BI Norwegian Business School, Norway; 3: University of Oxford, UK
 
Large language models (LLMs) such as GPT-4 have raised questions about the changing nature of work. Research has started to investigate how this technology affects labor markets and might replace or augment different types of jobs. Beyond their economic implications in the world of work, there are important sociological questions about how LLMs connect to subjective evaluations of work, such as the prestige and perceived social value of different occupations, and how the widespread use of LLMs perpetuates the often-biased views of labor markets reflected in their training datasets. Despite initial research on LLMs' world models and their inherent biases, attitudes and personalities, we lack evidence on how LLMs themselves evaluate occupations, as well as how well they emulate the occupational evaluations of human evaluators. We present a systematic comparison of GPT-4's occupational evaluations with those from an in-depth, high-quality survey in the UK context. Our findings indicate that GPT-4 and human scores are highly correlated across all ISCO-08 major groups for both prestige and social value. At the same time, GPT-4 substantially under- or overestimates the occupational prestige and social value of many occupations, particularly emerging occupations as well as stigmatized or contextual ones. In absolute terms, GPT-4 scores are more generous than those of the human respondents. Our analyses show both the potential and the risks of using LLM-generated data for occupational research.
 
INOX Suite 1
