

Brands must defund misinformation to protect their online reputations - here's how

Zefr


Open Mic article

This content is produced by a publishing partner of Open Mic.

Open Mic is the self-publishing platform for the marketing industry, allowing members to publish news, opinion and insights on thedrum.com.


March 10, 2023 | 5 min read

By Emma Lacey, SVP EMEA, Zefr


News travels fast in our hyper-connected world; so fast that it’s difficult to verify its credibility before it’s shared across the internet. This means that users are often inadvertently reading fake news – and this has real-life social consequences.

False claims about vaccine harmfulness, for example, can influence individuals’ decisions to comply, causing repercussions for the wider community. And for brands, appearing to endorse such content, however unintentionally, can have a damaging impact on their image, not to mention their campaign performance.

With misinformation continuing to evolve alongside digital media and technology, it’s becoming harder to determine what’s factual and what’s not, and to avoid unsuitable content before it goes viral. So what exactly is misinformation, and how can we tackle it effectively in such a content-rich digital landscape?

What is misinformation?

Misinformation is the spread of false information. It differs from disinformation, which carries the intent to mislead. Propaganda is a classic example of disinformation: campaigns purposefully designed to promote a political agenda or point of view, as we have seen throughout the Ukraine crisis.

According to a YouGov survey for Readly, 65% of Brits are worried about the spread of fake news, and almost a quarter (24%) reported being exposed to it on a weekly basis. With the popularity of social media and the rise of user-generated content (UGC), it's becoming increasingly difficult to detect misinformation and stop it in its tracks. The complexity and dynamism of video content in particular render traditional safety strategies, such as keyword blocklists, ineffective. Speech is far more nuanced than the written word, and it evolves constantly and spreads rapidly.

Moreover, influencers are often the creators of such content, and they have vast followings of loyal fans who trust them; if content creators unknowingly share false information, the chances are it will be taken as gospel.

The repercussions of fake news

The speed and scale with which untruthful content can be distributed across the internet mean its prevalence alone is often taken as verification of fact: if enough people share and believe it, surely it must be true. This is what makes it so dangerous; it has far-reaching consequences and influences audiences' beliefs, opinions and behaviors.

From an advertising perspective, it’s important to defund misinformation to help promote a responsible digital ecosystem. Having the right tools in place to accurately detect falsehood will ensure brands can avoid unintentionally aligning with, and monetizing, it. Moreover, not only does poor content adjacency affect brands’ reputations, but it also leads to ineffective campaigns. Ads that appear next to fake news are unlikely to reach their intended audiences, resulting in wasted spend and poor ROI.

How AI and human moderation can tackle harmful content

Brands traditionally relied on keyword blocklists to remove inventory with potentially harmful content. But these legacy tools used broad terms and categories that were not updated often enough to incorporate new meanings, and which often excluded perfectly harmless content along with malicious sites. These simplistic systems are not robust enough to tackle misinformation at scale, especially across online video on social platforms.
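To illustrate why blunt keyword matching over-blocks, here is a minimal sketch of a legacy blocklist filter; the blocked terms and example headlines are invented for illustration, not drawn from any real tool:

```python
# Hypothetical blocklist of "unsafe" terms, matched without any context.
BLOCKLIST = {"shooting", "virus", "crash"}

def is_blocked(text: str) -> bool:
    """Block any content containing a blocklisted term, context-free."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

# A blunt match excludes harmless content along with harmful pages:
print(is_blocked("Photographer wins award for wildlife shooting"))  # True
print(is_blocked("Local bakery wins regional pie contest"))         # False
```

The first headline is perfectly safe, yet the filter excludes it just as readily as genuinely harmful content, which is the over-blocking problem described above.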

Sophisticated, AI-powered technology is needed to accurately categorize dynamic content, taking into account its context and sentiment to determine its suitability. Brands should gather data from fact-checking organizations to train machine learning algorithms, allowing them to spot phrases typically used in unreliable content. Moreover, brands should look to industry bodies such as the Global Alliance for Responsible Media (GARM) – which added misinformation to its Brand Safety and Suitability framework in 2022 – for categorization guidance and independent verification to ensure accuracy.
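The idea of learning tell-tale phrasing from fact-checked examples can be sketched in a few lines. This is a toy word-frequency score, not a production classifier, and the labeled snippets are invented for illustration:

```python
from collections import Counter

# Invented examples standing in for fact-checker-labeled training data.
flagged = ["doctors don't want you to know", "the truth they are hiding"]
reliable = ["officials confirmed the report", "the study was peer reviewed"]

def phrase_counts(texts):
    """Count word occurrences across a list of labeled snippets."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

bad, good = phrase_counts(flagged), phrase_counts(reliable)

def suspicion_score(text: str) -> int:
    """Count words seen more often in flagged than in reliable examples."""
    return sum(1 for w in text.lower().split() if bad[w] > good[w])

print(suspicion_score("the secret truth doctors are hiding"))  # 4
```

A real system would use far larger corpora and a trained model rather than raw counts, but the principle is the same: labeled data from fact-checkers teaches the system which language patterns correlate with unreliable content.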

Finally, there is still no replacement for human moderation. Brands should incorporate teams to manually check AI classifications wherever they can, as there will always be anomalies that technology cannot process. A pair of human eyes adds an extra layer of precision and safety.

There’s no denying that misinformation is a significant problem in our society, and the digitalization of content has exacerbated its spread. A two-pronged approach of AI-enabled technology and human moderation can help brands accurately identify, and ultimately avoid, unsuitable content, helping to defund misinformation for a safer digital ecosystem. In turn, they can deliver highly effective campaigns that reach suitable audiences in contextually relevant environments, hitting key targets and delivering ROI.

