Feature

Fighting the good fight: how media organisations are hitting back at online propaganda

Misinformation, whether employed by nation states, marketers or shadowy organizations with a darker agenda, distorts our view of the world. But the solution requires a multi-tiered approach, from encouraging platforms to clamp down, to supporting respected news outlets as they seek to publish the truth.

The pen is mightier than the sword, or so goes the 180-year-old saying coined by English writer Edward Bulwer-Lytton in his play Cardinal Richelieu. He follows this thought with the less-quoted refrain, ‘Take away the sword; states can be saved without it!’

By Bulwer-Lytton’s logic, the mighty pen can dismantle nations. It’s hardly a stretch then to apply a slightly more modern spin and say that Dictionary.com’s 2018 Word of the Year – 'misinformation' – poses just as great a threat.

Fake news erodes reality. When falsehoods fly, alternative facts hijack discourse – and, in essence, history, which is written by the victor.

From their information war rooms – which are, sometimes, literally war rooms – the media, marketers, PRs, tech platforms and the state can distort how the world is perceived by the public.

A state of misinformation

The British Army may have long since traded in its swords for far deadlier weaponry, but still it wields the pen and employs journalists, photographers and marketers to shape favored narratives.

BBC journalist Christian Hill is a former troop commander in the 3rd Regiment Royal Horse Artillery. He was later embedded with soldiers as part of the British Army’s Media Ops unit in Afghanistan, contributing to its marketing and media engine as a combat cameraman and “promoting” the war – a role that carried an inherent friction between duty to his country and reporting the facts, especially in 2011, when the conflict claimed 46 UK soldier fatalities.

“The British military’s media operations have always avoided misinformation or ‘black propaganda’,” he tells The Drum. “We would never go so far as to lie by omission.” Sometimes, however, he says, the focus was narrow. “Maybe the failure to highlight the bigger picture is, in itself, a distortion, although I would disagree. It was a form of PR for the military operation – a fine line, admittedly, but not propaganda.”

The message was to show ‘all was well’ in Helmand, incorporating testimony from allies and indigenous peoples. Ideally this was echoed by the media – when they would accept his material. “Some outlets were a little wary of combat camera team content, fearing a loss of impartiality if they were seen to be broadcasting footage and stills that could be perceived as army propaganda.” Smaller titles were more accommodating if the unit or personnel were local. Or if the content was impressive enough.

The Media Ops unit has since been absorbed into the shadowy 77th Brigade which, according to its website, practices “modern warfare using non-lethal engagement and legitimate non-military levers”. It is not alone.

In 2017, Russia openly launched an information warfare task force. Nato and bordering nations then anticipated a barrage of ‘hostile propaganda’. They didn’t have to wait long. In 2018, a Russian-penned fake news story accused German soldiers of raping a woman in Lithuania during a military exercise. The story was debunked, but not before spreading widely.

Hill’s media ops unit was merged with the 77th to form an outfit capable of delivering content, digital ops and sentiment analysis like a modern, in-house marketing team. But this, he says, strengthens the accusations that combat cam content is propaganda. Since he resigned in 2014, he notes, social media has reportedly swayed western elections and referendums. Now the army is increasingly reliant on it for direct-to-audience communications.

“The growth of social media has fueled the rise in fake news,” he says, “providing multiple platforms for disinformation.

“Encouraging the likes of Facebook to clamp down on the problem is obviously a start, but the west also needs to protect the reputation of respected news outlets.”

The 77th could strengthen its ability to counteract disinformation from foreign entities if it championed media interactions again. “In this game, credibility is king,” Hill concludes.

Media fights back

Jamie Angus, director of BBC World Service Group, says fake news and misinformation are causing “real-world consequences,” and that there are “half a dozen examples around the world where people have lost their lives”.

BBC World Service is leveraging its weekly global audience (346 million people across 42 languages) to create a framework to combat fake news. With fake news driving atrocities in Myanmar, so-called WhatsApp murders in India and ethnically motivated conflict in Nigeria, Nobel laureate Wole Soyinka told a recent BBC event in Nigeria that, “if we’re not careful,” it will be the trigger of World War III – and, he says, “that fake news will probably be generated by a Nigerian”.

Facebook and Twitter have been criticized for a lack of action, but Angus worries that encrypted messaging services like WhatsApp are the hotbeds of misinformation. “These features are particularly liable to the spread of low quality (or fake) news and disinformation. It is unsearchable. Facebook and Twitter are indexable and can be searched by machines, but chat is unlike that. It is only when it is too late that you find out a particularly toxic piece of fake news has been shared in chat.”

The solution is embedding trusted broadcasters like the BBC in these apps, says Angus. He proposes partnering with WhatsApp during elections to establish irrefutable facts and improve the health of debate. “It is a complex issue and there needs to be action on global media literacy, cooperation and collaboration between quality news publishers and tech platforms to ensure the best stuff floats to the top.”

Regulation, he says, should be a final resort if broadcasters and platforms fail to come to a resolution. “Journalists want to see a news economy that raises up the good to the detriment of the bad. I remain quite optimistic that, over time, we will get to a better place.”

Message in a bot

Professor Filippo Menczer studies Twitter botnets at the Indiana University Network Science Institute. One of his reports looked at how Twitter bots spread misinformation during the 2016 US presidential election.

His team analyzed 14m tweets between May 2016 and March 2017 and found that only 6% of studied accounts were bots. Those 6% were responsible for sharing 31% of all “low-credibility” information, however, and he says this was widely used to “manipulate or guide narratives”.

Humans were just as likely to retweet bots as other human accounts, Menczer says. “It is not difficult to program deceptive bots that are difficult for humans and machine learning algorithms to detect.”

For a bot to be effective, it needs real humans to amplify its message. Furthermore, the research found that most bots were tooled for business rather than for political purposes. “There is anecdotal evidence of bots being used to manipulate financial markets, especially cryptocurrencies. Spam was one of the first applications of bots and is still something platforms spend a lot of resource combating.

“Bots distort our view of what is important, what is popular and what people believe. They manipulate our cognitive and social mechanisms which respond to these cues and affect our own opinions and behaviors. If you eliminate or decrease the distortion, then our opinions will be based on a more realistic view of the world.”

A web of lies

Wikipedia is the fifth most visited website in the world. The ad-free encyclopedia is burdened with documenting facts in an ad-funded media landscape that does not necessarily value truthfulness as much as clicks. The site boasts 48m articles across 300 languages, with 1,200 voluntary gatekeepers protecting the integrity of its English-language pages alone.

John Lubbock, communications coordinator of the UK chapter of Wikimedia, which supports Wikipedia and its sister projects, admits that “wars between nationalists” often play out on the site. Turkish and Kurdish admins have clashed over the nationality of musician Ahmet Kaya and there have been disputes about whether Carles Puigdemont is a Spanish or Catalan MP. The site was recently banned by Venezuela and is approaching its second year of being blocked in Turkey.

After two nationwide bans, the Russian government in 2017 instead enlisted young citizens for “patriotic editing” to populate Wikipedia with “truthful information about achievements and deeds of the Russian people”. Lubbock admits social networks are proving to be softer targets for misinformation.

Shamed and disbanded PR firm Bell Pottinger offered Wikipedia edits half a decade ago, and concerns emerged that it would try to cover up the human rights abuses of clients, with Wikipedia co-founder Jimmy Wales describing the PR agency as “ethically blind”. Lubbock doubts that these “dark arts” would have been successful, saying it is very difficult to whitewash high-traffic, English-language Wikis.

“There are eagle-eyed editors who are interested in pages about the human rights breaches of dictatorships. It is reasonably hard to bias any page and any agency offering that service is running a scam. They can’t guarantee that they won’t get found out or have it changed back.”

Bans are issued to Wiki defacers, but willful abuse and ignorance of the rules are two different things. “It can be problematic. People write about things they are often closely connected to and they are not always the most objective people. The issue of getting people to edit in the right way will always be a problem because we don’t have the money that Facebook (and Instagram) or Google (and YouTube) has.”

Again, it comes back to the digital ad duopoly. They rely upon Wikipedia – Facebook to rate publisher reliability in its News Feed (which has angered Breitbart) and YouTube to contextualize conspiracy theory videos on subjects like chemtrails and the Holocaust. Lubbock says that by doing so they are “abdicating responsibility” and placing the burden of proof on the site.

And again, it comes back to finding a sustainable model to support Wikipedia and media companies in their mission to pen, as best they can, a ledger of truth.

This feature first appeared in the cyberwarfare issue of The Drum magazine. In it, we take a look at the role of our industry in a world where humdrum technology and everyday communication have become weaponised, from our smart homes being hacked and our fridges held to ransom to fake news and deepfakes having far-reaching ramifications for global politics. You can buy your copy here.