Is ChatGPT a threat to media and content production?
Have you spent the past few months happily exploring the endless possibilities that ChatGPT offers those of us in content production? It can instantly produce indicative questions for interviews, write client briefs, suggest ledes, create meta descriptions and compose social media posts, complete with hashtags.
Granted, those generated responses demanded an editor’s heavy touch to guide the text in a usable direction, but the technology is only in its infancy. What will happen as its capabilities and quality evolve? Fundamentally, will ChatGPT threaten our jobs in media and content production?
In the spirit of fairness, I gave the AI chatbot a chance to defend itself by asking it that very question:
"ChatGPT is not a threat to content production. It is a tool that can aid content creators in generating ideas and writing text, but it cannot replace human creativity and the unique perspectives that only a human can bring. Additionally, while ChatGPT can generate text based on given prompts, it is important to note that its responses are not always accurate or appropriate, and they may require editing or fact-checking before being used in any final product."
The succinct answer was reassuring until I realised that a few kilobytes of AI bias were probably at play, along with some self-protection. Still, it is not as though we haven’t had plenty of opportunities to prepare for the impending impact and fight back against the machine.
I remember casually dismissing the 2014 news of an algorithm automatically writing a brief, factual story about a magnitude 4.4 earthquake in southern California for the LA Times website. Since then, I have unconsciously embraced countless other work tools driven by AI and machine learning, ranging from Google Translate and the speech-to-text transcription application Otter.ai to YouTube’s automatic captioning to Grammarly, which caught several typos in this blog.
Too much, too soon
For a human perspective on the threat from ChatGPT and Generative AI to my job as a senior editor in the media sector, I contacted Prof Dr Lucy Kueng, an international expert on mastering digital transformation. She began by expressing concern about its potential impact on journalism and, by extension, its role in society and democratic processes.
"The early crop of AI tools can’t distinguish between fact and fiction or overcome bias. Algorithmically generated news can quickly become misinformation at scale, and that misinformation can flow into the AI models, perpetuating it," she says, further warning that the competitive dynamics could also worsen.
"AI lowers entry barriers since anyone can produce professional-seeming content, it increases the threat of new entrants (with lower cost business models), reduces the bargaining power of suppliers like journalists and other creators, and increases the threat of substitutes. These factors will fuel competitive rivalry because the sector has many locked-in players with a high emotional commitment to the field fighting for the same customer group."
Kueng, a Senior Visiting Research Fellow at the Reuters Institute, Oxford University, believes it will be up to industry leaders to decide how this plays out. "Will it take out the ‘grunt work’, the repetitive, time and cost-intensive tasks, and leave humans to focus on more creative, intellectually demanding and rewarding work, or will it take out some of the humans too?"
Any immediate reassurance that these intelligent technologies won’t put me out of a job appears premature. Time will tell… Until then, we can only continue producing and publishing content for those AI systems to exploit as free training data. If a machine eventually kills off content producers, please don’t let ChatGPT generate our obituary.