This month many of the world's largest brands have opted to pause their advertising spend on social media platforms, particularly Facebook, in support of a call for them to tackle the spread of misinformation on their sites. The proliferation of misinformation and fake news is a persistent and dangerous trend that the sector has long been accused of not doing enough to counter. Chris Walts, social strategy lead for Ogilvy, offers his own views and examples of how the issue could be tackled.
We’ve succeeded in creating the most connected society in history; over 4bn of the 5.5bn adults on earth own a smartphone. The difficulty is, connecting everyone includes connecting the ‘bad people’. – Benedict Evans
Debates are raging around social media boycotts, algorithmic biases, and content moderation. While most people seem to agree that they want ‘bad content’ removed, it’s less clear what ‘bad’ actually is and what the consequence of that removal would be. Clearly things need to change and systemic reforms are needed, yet the problem is we’re all debating the wrong issue. We need to stop arguing about freedom of speech vs. content moderation. The real problem is freedom of reach.
Freedom of speech in social media
It’s easy to say there should be more content moderation, but determining what should be taken down is far more complicated. Social networks offer a mixture of publishing options and different distribution models: there’s advertising, recommendation engines, public feeds, stories, groups, private feeds, group messages, and one to one chat – Benedict Evans
Any conversation about moderation needs to include which distribution method should be moderated. Should, for instance, individuals be able to say whatever they want to a friend but not to a group of friends or to the rest of the world? Where the lines get drawn is complex and incredibly important.
Any talk of content moderation naturally leads to a discussion around freedom of speech. However, freedom of speech has never been simple and has always had limitations. This stems from the fact that different liberal democracies have widely varying attitudes on who individuals should have the right to offend with their speech.
The US, for example, feels very differently about freedom of speech around religion, minorities, and sexual exploitation than, say, Japan or India (Pew Global Attitudes Survey). Determining who gets to define the boundaries of freedom of speech is paramount, yet it is difficult to regulate and enforce on a country-by-country level. A global solution is needed, but that too has its own set of problems. As Benedict Evans says, “global regulatory solutions could force platforms to regulate at the lowest common denominator, which would mean the strictest rules.” This could lead to a country like Myanmar’s rules on freedom of speech being applied to the entire world, which isn’t an acceptable solution.
Addressing the real issue
While the discussions around freedom of speech and moderation levels are important, they completely miss the new technological element social media platforms have brought to public discourse: free amplification. Anyone, from anywhere, can now reach a global audience in minutes. The amplification effects of social media have redefined people’s access to information in a way that hasn’t been felt since the printing press. Even the printing press isn’t really a fair comparison, as social media has essentially made the printing press free at point of access and given its users the ability to post its outputs directly to anyone in the world at any time.
The issue that needs to be addressed isn’t freedom of speech, which we’ve had for decades; it’s freedom of reach.
As Aza Raskin explains, “We are guaranteed the right to freedom of speech. We are not guaranteed the right to freedom of reach. We need amplification liability for internet platforms.”
Casey Newton expands on the issue further:
“Freedom of reach is arguably the question this year for platforms reckoning with their potential culpability in the erosion of democratic norms and the promotion of state violence. It’s what separates them from normal publishers, to which they are constantly comparing themselves…”
Freedom of reach poses a different set of questions for platform policy teams and executives to think through. It asks in what ways a product can be exploited, wittingly or unwittingly, to recruit new followers for a person or an ideology — and whether the company feels comfortable with granting an account those privileges.
“What Facebook group you’re encouraged to join is a freedom-of-reach question. Which YouTube video gets recommended is a freedom-of-reach question. Which Twitter account you’re told to follow is a freedom-of-reach question. And who shows up in Snapchat Discover as a suggested follow is most definitely a freedom-of-reach question,” says Casey Newton.
“It’s important then to understand what makes freedom of reach so different on social platforms from traditional media outlets. While traditional media outlets do amplify negative stories or points of view simply by choosing to talk about them, they also (can) discuss the subjects and add context, history and rebuttals. In contrast, the content on social platforms that gets amplified is simply the original content itself,” Newton continues.
The loss of context, coupled with social media recommendation algorithms, creates an echo chamber that continues to reinforce people’s views. This might be less of an issue if all of the content being created was factually correct, but often the most shared stories are fake or inaccurate.
To combat ‘fake news’ many social platforms are now starting to include disclaimers and fact-checks on controversial subjects. While these steps are signs of progress and done with positive intention, it turns out people often don’t care that a post contains false information if it helps reinforce their world view. Darren Linvill, a professor at Clemson University who researches social media disinformation, has found that the goal of misinformation isn’t to persuade people to adopt a new view, it’s about “trying to reinforce existing beliefs and get people more entrenched in those beliefs. The more entrenched we are, the less possible it is to agree with the other side,” according to The Washington Post.
This means all of the disclaimers and counterpoints in the world might not make any difference. Furthermore, the content in these situations might not even be what most people consider ‘bad’. It could just be misleading information or a lightly doctored image or video – something that wouldn’t violate most moderation policies. The problem is not the content or speech itself, but how easily it transmits.
Building a better social future
We need to ask ourselves if it’s right and fair that someone with 100 followers can instantly be seen by millions of people. Should there not at least be some initial limitations placed on publishing content to social platforms to curtail its reach and distribution?
Careful consideration clearly needs to be given to the issue to avoid silencing already marginalised voices and legitimate protest and journalism, but there are systems that could be put in place to minimise the risks. For instance, accounts over a certain age, or content that has been reshared by a reputable source, could have their reach limitations removed. Or perhaps a post’s reach could be limited to grow only at a rate relative to the author’s follower count. The goal isn’t to silence or stop people from being able to connect around the world, simply to slow the spread of information so it doesn’t propagate unchecked.
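To make the idea concrete, the kind of rule described above could look something like the sketch below. This is purely illustrative: every name, threshold, and exemption here is an assumption invented for the example, not any platform's actual policy or algorithm.

```python
# Hypothetical sketch of a "freedom of reach" throttle.
# All thresholds and rules are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Account:
    followers: int
    age_days: int
    endorsed_by_reputable_source: bool = False


def allowed_reach(account: Account, requested_reach: int,
                  trusted_age_days: int = 365,
                  growth_factor: int = 10) -> int:
    """Return how many feeds a post may initially be pushed to.

    Trusted accounts (old enough, or endorsed by a reputable source)
    keep unrestricted reach; everyone else is capped at a multiple of
    their follower count, so content spreads at a rate relative to the
    audience that chose to follow them rather than going instantly viral.
    """
    if (account.age_days >= trusted_age_days
            or account.endorsed_by_reputable_source):
        return requested_reach  # reach limitations removed
    return min(requested_reach, account.followers * growth_factor)


# A new account with 100 followers cannot instantly reach millions:
print(allowed_reach(Account(followers=100, age_days=30), 5_000_000))   # 1000

# An established account keeps its full reach:
print(allowed_reach(Account(followers=100, age_days=800), 5_000_000))  # 5000000
```

The design choice being illustrated is that nothing is deleted or censored; distribution is simply slowed until an account earns trust, which is the distinction between moderating speech and moderating reach.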
Many social platforms have taken steps towards content moderation and in some instances the solutions are becoming robust; Facebook, for instance, says it removed 9.6m pieces of hate speech in Q1 of 2020. But all platforms continue to avoid the underlying issue around freedom of reach. The conversation needs to pull a Silicon Valley pivot to break the echo chamber and create change.
While the current advertiser social media boycott may have some effect, there are questions about its authenticity. As The Verge points out, “Going on Twitter to say ‘Facebook should do better,’ and collecting your retweets and getting a nice news story out of it while saving some money in the process... it’s trying to solve the wrong problem.”
The debate needs to shift to exploring the issues of freedom of reach and the reforms and regulations required for a better social future. Critically, the regulation can’t come from the platforms themselves, as their progress will only ever go so far. It needs to come from governments and global governing bodies who look at the wider societal impacts and unintended consequences.
When we finally do start having the much-needed discussions, dialogue and debate around building a better social future, we need to ensure social media itself is not painted as the problem. It’s not the technology’s fault that humans have issues we’re still struggling to address. We need to remember that social media has democratised information and access in ways that were previously impossible. It’s given a voice to those who had never been heard, helped topple governments, shed light on atrocities, and helped us all remember our friends’ birthdays. We can’t lose the new connective tissue social media has helped bring to the world. It’s too important. That’s why we need to start talking about the real issues. Content isn’t the problem; it’s how easily it spreads around the world.