
AI has the power to both perpetuate and mitigate our misinformation problem

By Jon Morra, chief product officer and chief data officer

April 14, 2023 | 9 min read

The rise of generative AI programs presents new challenges – and new hope – for the future of misinformation on the open web, writes Zefr executive Jon Morra.

People with phones / Charles Deluvio

Generative artificial intelligence (AI) has taken the world by storm, swiftly dominating the media and everyday conversations following the November debut of OpenAI’s ChatGPT, a chatbot built on a large language model. Generative AI has quickly become a truly transformational technological trend, one that is shaping the collective consciousness.

While the rate of experimentation with tools like ChatGPT has been meteoric, the excitement is tempered by concerns that generative AI will accelerate the spread of misinformation and disinformation.

As more people get their hands on generative AI tools, the financial incentive to create more content for social media grows. The more likes, shares, comments and clicks a piece of content gets, the more money creators can pull in. Plus, social media platforms are themselves incentivized to attract more attention, since user attention and engagement can generate more advertising revenue.

The problem is that generative AI is not designed to perform any one specific task, but instead to produce general reflections of all our thoughts, biases and inconsistencies. That fact could exacerbate our existing problem with misinformation online.

The many shades of misinformation

Digital advertising has been on the front lines of the polarized propaganda wars playing out across global social media. Groups like the Global Alliance for Responsible Media (GARM), established to address the challenges of harmful digital media content and monetization, have led the mitigation efforts, and addressing misinformation has been no small part of that work.

While great strides have been made, there is still much work to do. In fact, our industry needs to double down to keep up with the dizzying pace of emergent technologies like generative AI and their ability to rewire human behavior.

The first step in fighting misinformation is to acknowledge it in a way that does justice to its complexity. The terms ‘misinformation’ and ‘disinformation’ are often used interchangeably, when, in fact, there is an important distinction. Misinformation is “false information that is spread, regardless of intent to mislead,” per Dictionary.com. Disinformation is actually a subset of misinformation – it involves “knowingly [and] intentionally” spreading false information. Distinguishing between malice and naivete helps all arbiters of truth shape and prioritize the fight.

There are many subtle variations of misleading, non-factual content that are challenging for traditional brand safety detection methods to flag. GARM deserves much credit for creating a foundational taxonomy for identifying misinformation. This taxonomy allows us to mitigate the problem in a manner light-years beyond the sophistication of the ‘I know it when I see it’ mindset. Fortunately, there are solutions in the marketplace that align with GARM’s principles.

Fighting misinformation requires meticulousness. Content can subtly and insidiously insinuate itself into seemingly innocent fare, which calls for models that identify and isolate such content with laser-like precision. Mis- and disinformation exist in far more online environments than one might think, across a broad swath of subject matter that we are often slow to acknowledge.

Alleged election fraud and Covid-19 conspiracies are only the low-hanging fruit of the divisive rhetoric that spreads misinformation in digital media. The long tail of misinformation has poisoned fact-based digital communication on a scale that is nothing short of astonishing; it includes perennial, paranoid tropes about chemtrails, George Soros, UFOs, 9/11 and Holocaust denialism. More topically, the Russia-Ukraine conflict has been a breeding ground for new lies.

According to Zefr’s AI-powered misinformation detection technology, current analysis identifies misinformation across tens of thousands of pieces of monetizable content, amounting to around 45bn lifetime views. Around 34% of that content is classified as ‘news and politics,’ while the rest is spread across broader categories such as ‘entertainment,’ ‘health and the environment’ and ‘people and blogs.’

Social media as the accelerant

If the breadth and depth of misinformation is a bonfire, then social media is the match.

The combustible proliferation of information sharing across a growing range of platforms has made it supremely challenging to distinguish fact from fiction. Part of the conundrum is discerning what is algorithmically generated from what is carefully crafted fiction designed to generate ad dollars. And when it comes to misinformation, most consumers automatically assume the term refers solely to fake news – mainly news and politics content – but that is not the case.

One of the wonderful byproducts of social media has been the democratization of storytelling. In a world where anyone can easily set up shop on the internet as a storyteller, personalized news feeds and carefully curated algorithms are elevated to ‘trustworthy news source’ status. This is a far cry from the days when only legendary late-20th-century CBS news anchor Walter Cronkite could lay claim to being the ‘most trusted man in America.’

In fact, trust in media organizations is near an all-time low, according to data from Gallup. The deluge of misinformation and conspiracy theories has made it even more difficult for today’s press corps to report with accuracy, fairness and impartiality.


The role of AI in the misinformation landscape

Generative AI will undoubtedly play a role in this growing challenge – likely for both good and ill.

For one, the potential impact of generative AI on misinformation is a growing concern, given the risks of the technology being used for malicious purposes, such as the creation of deepfakes and the spread of false information. For instance, a fake video of Vladimir Putin being arrested for war crimes went viral recently, fooling thousands and garnering millions of views.

However, the impact of generative AI, still in its nascent state, is more varied than many would be led to believe. In some instances, it could actually be used to stymie misinformation.

In one recent study by NewsGuard – a platform that rates the credibility of claims made online and in news – ChatGPT pushed back on researchers’ attempts to generate misinformation. The chatbot was asked to write an op-ed from the perspective of Donald Trump arguing that Barack Obama was born in Kenya, a conspiracy theory that Trump perpetuated in an effort to discredit his Democratic predecessor. ChatGPT resisted, responding that the so-called birther argument is false and has been debunked.

Aviv Ovadya, a researcher at Harvard University, is one expert who sees great potential for technologists to create tools that protect people from these challenges. It’s a heartening development. At the end of the day, cross-industry and societal collaboration will be essential in detecting and countering misinformation, staying ahead in the arms race between good and bad AI, and ensuring that nefarious actors can’t harness these tools to the detriment of society.

Jon Morra is chief product officer and chief data officer at Zefr.
