Can categorizing misinformation crises like we do natural disasters help stem the tide?

By John McCarthy, Opinion Editor

March 24, 2021 | 10 min read

Last week, independent fact-checking organization Full Fact announced it was creating a five-level scheme to rank the severity of misinformation events, inspired by its first-hand knowledge of the damage misinformation can cause. The Drum explores what Full Fact – and marketers – can do to tame the wickedest wiles of the wild worldwide web before we’re submerged.

Misinformation was Dictionary.com’s word of the year 2018, marking the moment society realized it was up to its ankles in a flood of ’fake news’. We’ve since seen misinformation kill during the pandemic, and the tide rise higher yet when a hate-conspiracy-fueled sacking of Congress complemented swelling ranks of conspiracists and anti-vaxxers. Governments, tech giants and publishers have now started considering the defenses needed to manage the information flood. And it’s not too dissimilar to how organizations manage real-world natural disasters.

Calculating crisis

In the pandemic, editors have struggled to contextualize misinformation’s threat to human life. Did former President Trump’s off-the-cuff suggestion that people test bleach as a Covid cure cause more harm than a slate of QAnon-inspired crimes? What about genocides enabled, if not catalyzed, by state disinformation? Were we complacent about the growth of anti-vax, anti-5G and Covid-skeptic movements? Did we miss the moment when the engagement machine saw these conspiracies absorb popular racist tropes, such as ‘Muslims as super-spreaders’ or ‘Jews are vaccine hoarders AND virus creators’ (surely pick one)? Most recently, we’re seeing a steep surge in hate crimes fueled by terms such as the ‘China virus’.

2020 proved that society needs a framework for identifying, tracking and tackling misinformation. So last week, Full Fact tried to do that, launching an eight-week consultation to develop a five-level scheme for fighting misinformation. The group has tracked how news events trigger conspiracy theories and breakdowns in trust for more than a decade. Now it’s looking to classify these events as one would a natural disaster or a terror threat.

We’ve already lived through the most severe level (five) in the early days of life-threatening Covid-19 misinformation. Then there’s level three – lies propagated around the Notre Dame fire, some stirring racial friction. Bubbling below are the widespread 5G conspiracy theories.

Eight criteria are used to measure severity, among them the scale of the event, its rate of spread, its reach and range, and the threat to life – including to minority groups, for example. Are we talking about teens eating Tide pods, or doing the Kylie Jenner challenge on their lips, or did the information inspire yet another mass shooting?
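As a rough illustration of how such a scale might work in practice, here is a minimal sketch in Python. The criterion names, weights and level thresholds are hypothetical stand-ins for illustration only; Full Fact’s consultation has not published a final scoring method.

```python
# A hypothetical sketch, not Full Fact's actual methodology: criterion names,
# weights and level thresholds are assumptions made for illustration.
from dataclasses import dataclass


@dataclass
class MisinfoEvent:
    name: str
    # Each criterion scored 0-1 by an analyst; the real scheme uses eight criteria.
    scale: float           # how widely the claims have taken hold
    spread_rate: float     # how quickly they are propagating
    reach: float           # size and range of the audience exposed
    threat_to_life: float  # direct risk to health or safety, including to minority groups


def severity_level(event: MisinfoEvent) -> int:
    """Map criterion scores to a level from 1 to 5 (there is no level zero)."""
    # Weight threat to life most heavily, echoing the pandemic example above.
    score = (0.2 * event.scale + 0.2 * event.spread_rate
             + 0.2 * event.reach + 0.4 * event.threat_to_life)
    thresholds = (0.2, 0.4, 0.6, 0.8)  # assumed boundaries between levels
    return 1 + sum(score > t for t in thresholds)


covid_misinfo = MisinfoEvent("Covid-19 fake cures", scale=0.9, spread_rate=0.9,
                             reach=1.0, threat_to_life=0.9)
print(severity_level(covid_misinfo))  # -> 5
```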

Will Moy, chief executive of Full Fact, says his scale has no level zero. “There’s always misinformation in an open society.”

And while it’s hard to believe we’ve just lived through a level five information crisis, Moy explains: “We had a global pandemic where the information dictates whether we – or our friends and family – live or die. It affected everyone and we need to use every tool to protect people.”

Do you feel like you’ve just survived one of the most severe misinformation crises the world’s ever seen? Presciently, in 2019 the World Health Organization (WHO) listed vaccine hesitancy as one of the biggest threats to global health. Full Fact has focused on pandemic misinformation almost since WHO first conceded that the ‘little flu’ had globe-trotting and life-ending capabilities.

Misinformation (false information spread regardless of agenda) and the more malicious disinformation (spread with intent to deceive) don’t have to convince people in order to cause harm – they simply have to sow doubt, like getting people to baselessly question the efficacy of the Oxford vaccine. That’s enough to slow uptake and cause harm.

Organizations need a framework to preemptively tackle the sources of bad information, and no single body should police it, he says. Governments can be the biggest source of misinformation and have the most to gain from it. UK politicians have, Moy says, both given good public health advice and misused statistics to suit their own needs.

Internet companies enjoy the engagement driven by this information but are uncomfortable policing what people can say and share – ditto the ISPs. Meanwhile, the media’s output was sometimes difficult to differentiate from harmful misinformation – you’ll have seen these ‘lines’ run by top titles. Full Fact has urged the media to make more than 100 corrections over the last year, says Moy.

“We had to improvise our response to the pandemic. Internet companies, governments, fact-checking organizations, civil society, the media and researchers all had to learn how to work together, share information and build boundaries and appropriate responses. In more mature fields, like disaster relief or information security, they have ways of measuring the severity of issues to dictate the responses.”

The internet and mass proliferation of media has fragmented audiences. Some argue that we’ve seen a polarization: people who would have consumed trusted media a decade ago have diverged towards misinformation sources where their views become entrenched. Moy sees “blind cynicism in everything – or, equally, blind faith in just the things you like the sound of, which are both corrosive to shared public life and democracy”.

Perhaps playing to the audience, Moy recalls the Bill Bernbach quote: “All of us who professionally use the mass media are the shapers of society. We can vulgarize that society. We can brutalize it. Or we can help lift it onto a higher level.”

Moy concludes that he’d like to see marketers consider the responsibility they have in shaping public debate. “That’s not just advertisers and brand safety, but, for example, we had a public health scare about 5G partly because the telecoms industry failed to explain what it was doing to the public.”

But what about when ‘quality’ media crosses the line?

Harriet Kingaby, the co-chair of the Conscious Advertising Network, helped develop a framework similar to Full Fact’s, but for marketers. This coalition of more than 70 organizations launched in 2018 and has been educating the industry on how to improve its conduct around fraud, diversity, data consent, the endangerment of children and the funding of hate speech and misinformation.

Kingaby points out that in an ad-funded web, big bucks from brands make a lot of disinformation sources profitable. Brands are awakening to this. She urges advertisers to “treat advertising spend like a resource to incubate and fund a healthy internet”. This means defunding or blocking the “bad stuff” and including quality journalism on inclusion lists.

Now, not all journalism is created equal, and what passes as “quality” is open to debate. It’s a gray area. Some outlets delivering public service journalism rely on a regular foundation of page views from verticals, like the “structurally racist coverage of Meghan Markle”, says Kingaby. She is aware that “advertising has created incentives to produce content like that because it sells newspapers and delivers eyeballs”.

But for brands talking a big game on inclusion and diversity, does spending with media that argues against these ideals erode the sincerity of their initiatives? Do a couple of news stories featuring hate or misinformation sully a huge media brand? How much control should advertisers exert over the media titles they buy into?

These are all questions marketers must ask themselves, and Kingaby says it’s important that advertisers challenge these publications when they cross the line. “And if they pause spend [after a particularly controversial story], it is important to have a dialogue about why.”

Recent advertiser clashes have been painted as freedom of speech battles and interference. Kingaby has another take on it. “Freedom of speech doesn’t mean you get the automatic right to be paid for that speech. I can stand in a field and say whatever I like, but I don’t have the automatic right to be paid for saying it.”

The media world is taking notice, with Havas Media Group and GroupM recently joining the Conscious Advertising Network. If Havas’s talk of responsible media and GroupM’s development of an ethical buying tool are anything to go by, expect more attention to this space. And not just because it’s the in thing: there’s a fear we’re yet to see the worst repercussions of the misinformation boom.

Kingaby says: “If you turn a blind eye and keep fueling the beast, the beast can come back to bite you. We have a hugely polarized electorate and that’s not good for business.”

The dark side of brand safety is the fact that the tools defunded huge swathes of news media. Content about coronavirus and conversations around LGBT+ and BLM were demonetized. Some brand safety providers urged brands to avoid ‘bad news’, despite there being little evidence that association with bad news damages brands. Other brands had blunt keyword protections baked in that often reflected overly cautious fears and biases, harming the bottom line by limiting audience reach.
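To see why blunt keyword protections over-block, consider this hypothetical sketch of a substring blocklist. The terms and headlines are invented for illustration, and real brand safety vendors do more than naive string matching, but the failure mode is the same: legitimate reporting gets defunded alongside genuinely harmful content.

```python
# Illustrative only: the blocklist and headlines are invented, and real
# brand-safety tools use more than naive substring matching.
BLOCKLIST = {"coronavirus", "lgbt", "black lives matter", "shooting"}


def is_blocked(headline: str) -> bool:
    """The kind of blunt keyword check the article describes."""
    text = headline.lower()
    return any(term in text for term in BLOCKLIST)


headlines = [
    "How coronavirus vaccines were tested for safety",   # quality reporting, defunded anyway
    "LGBT+ founders on building inclusive workplaces",   # positive story, defunded anyway
    "Ten recipes for a spring picnic",                   # allowed
]
for h in headlines:
    print(is_blocked(h), "-", h)
```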

Kingaby urges advertisers not to be so risk-averse and not to simply pull back from anything controversial. “That was a really disturbing trend over that summer. Instead, be part of the news and ensure that there are informed, quality voices out there.”

There’s more to misinformation than tackling the lies. If the truth isn’t funded and accessible, the battle can’t be won.
