AI requires human oversight if it’s to be trustworthy
Powerful AI models like ChatGPT are revolutionizing journalism, PR, marketing and other creative industries. But without the close supervision of human beings, writes Carma co-managing partner Richard Bagnall, these tools can potentially cause more harm than good.
ChatGPT occasionally generates factually inaccurate information which can be difficult for humans to detect. / Adobe Stock
Artificial intelligence (AI) is rapidly transforming our media and information landscape.
Articles created by ChatGPT are becoming increasingly common; AI-generated images of public figures like Pope Francis and Donald Trump have been taking social media by storm; a media outlet in Kuwait has even begun harnessing AI to deliver news bulletins. These developments and others in recent months have set a precedent for an AI-driven world in which our modes of receiving, transmitting and sharing information will be radically altered.
The current wave of excitement surrounding AI has given many industries new confidence. The technology is rapidly being adopted by many brands, and research from Accenture suggests it could double yearly economic growth rates by 2035. Generative AI (models designed to create images, videos, text and other creative assets) could prove particularly useful to agencies, helping them save significant amounts of time and effort.
With this great power to create new content, however, comes great responsibility.
Generative AI models can make significant errors with misleading confidence. Bearing that in mind, brands should be careful to consider the dangers posed by the adoption of generative AI, such as the spread of misinformation and massive reputational harm.
A new era for content
Generative AI models like ChatGPT create content in response to text-based prompts. Many of these tools are also easily accessible, adding to their huge potential within creative industries like journalism, PR and marketing.
Thanks to AI, professionals in these industries will soon experience a huge shift in their day-to-day workflows. They’ll no longer need to spend hours coming up with new content ideas or drafting large amounts of text, since generative AI can now handle these tasks with ease. The technology can produce content at a speed no human can match, and, more importantly, it learns from each user interaction, which means it’s constantly improving.
AI, in other words, is a goldmine for creative professionals who are looking to be less burdened by monotonous and mundane tasks and broaden their skills in other domains. But the technology is far from perfect.
The Ignorance of AI
Amid all the current excitement, it’s easy to fall victim to the idea that AI is some kind of alien superintelligence. In reality, it’s simply a set of algorithms programmed by humans to achieve specific, narrow goals, such as producing coherent text in the case of ChatGPT. Left unchecked by human eyes, AI could perpetuate misinformation and harmful biases. For brands, that could lead to reputational harm and financial losses. At a broader societal level, it could lead to far worse.
While ignorance paired with confidence might be forgiven in an ambitious person just starting a career in journalism, the same cannot be said for AI models being used to generate supposedly reliable information. But these tools exist, and they aren’t going anywhere anytime soon. So where do we go from here?
From a governmental perspective, we’ll need fast and smart legislation, though it remains to be seen whether that is even possible. From a brand perspective, we’ll need smart and sophisticated strategizing, including realistic risk analysis.
We can’t trust AI
If we want to continue enjoying the benefits of AI, flaws and untrustworthiness included, then human oversight must be emphasized more, not less. We must ignore the hype, thinking not about the “wow” factor but the “how” factor: how does this technology work, how can we understand its strengths and weaknesses, and how can it best serve us?
It is only through a realistic and thorough understanding of AI, as well as through close human supervision, that we’ll be able to minimize the risks and maximize the benefits that the technology presents. Without human beings in the loop, AI simply can’t be trusted.
Richard Bagnall is co-managing partner of Carma. For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter here.