Bots and rogue advertising are being used to manipulate social media. It’s why both must face greater scrutiny.
The networks and technology that promised to democratise media and improve public discourse are creating threats to the democratic process.
New research is shining a spotlight on this issue and shows how little we know about how our behaviour is being shaped. Evidence emerging following the EU referendum in June 2016 and the US election in November 2016 points to the need for stronger governance on social media platforms.
In my view, social media needs to be held to the same rules and regulations that apply to mainstream media. This should cover advertising, media law and, in particular, the democratic process.
The promise of social media was that we’d be able to connect with each other and have an equal voice in public discourse. Facebook has achieved the reach. It connects 43 million voting UK citizens via its mobile app and website. It’s a very intimate form of media that reaches us on our devices, reinforced by signals from the people in our networks. The platform is incredibly powerful, but you’d struggle to make a case that it has improved the democratic process. In fact, I’d argue the opposite.
Bias in newsfeeds and communities
Facebook’s news algorithm reinforces our existing biases. We’re presented with content based on the signals that we share with the platform.
Head to your Facebook feed right now and try to find an article that challenges your political opinion. You’ll struggle. Newsfeeds create media bubbles. It’s why so many people in the UK misjudged the outcome of the EU referendum.
Meanwhile, Facebook’s networks enable like-minded people to join together in groups or communities around issues and topics. Pro-Brexit supporters huddle with pro-Brexit supporters, and Remainers with Remainers.
There are no shared spaces.
Media bias doesn’t stop here
The arguments around fake news are well rehearsed. It serves two main purposes: propaganda and profiteering.
Meanwhile, the disclosure of paid influencers is an issue that advertising regulators are starting to address.
Now research shows that public discourse is further biased by advertising and bots. Evidence due to be presented to US Congress today by Facebook, Google and Twitter will show the extent to which overseas actors attempted to interfere with the US election. The New York Times reports that Russian-originated content may have reached up to 126m US users during the election. Meanwhile, Twitter has identified 2,752 accounts and more than 36,000 bots that tweeted 1.4m times.
Disclosure from platforms and academic research
Google said in a blog yesterday that it has found evidence of efforts to misuse its platforms during the 2016 US election by actors linked to the Internet Research Agency in Russia. It uncovered 1,108 videos with 43 hours of content on YouTube.
The issue isn’t unique to the US election.
Researchers at City, University of London uncovered more than 13,000 suspected bots that tweeted predominantly pro-Brexit messages during the EU referendum. In a paper published in Social Science Computer Review, the team said that the network vanished within weeks of the vote.
My view is that we’ve only just glimpsed the tip of the iceberg in understanding how social media can be manipulated. We need greater disclosure from social media platforms, more academic research, and the introduction of legislation to start to deal with this issue.
Stephen Waddington is chief engagement officer at Ketchum and visiting professor in practice at Newcastle University. He tweets @wadds