Free speech or online harm? The responsibilities of social platforms
Recently, Twitter took the unprecedented decision to add a warning label to some of Donald Trump’s tweets. This naturally led to a lot of anger from the president, and to a host of questions about the role of social media platforms as arbiters of truth.
Trump has benefited from ranting on Twitter since his presidential campaign first began, and has long been given a free pass in the name of freedom of speech. Twitter’s position, like that of other social media giants, was clear from the beginning: if they intervened, they’d be infringing on people’s right to freedom of expression. Recently, however, they marked Trump’s tweets about mail-in ballots with a fact-check warning and hid them from view. People still had the option to view the tweets, but had to go past the warning first.
Some have asked why Twitter didn’t just delete the tweets outright, but last year the company announced that it would not delete posts by public figures, adding a warning label instead. This is the first time it has exercised that new policy. Before this, social media platforms had taken a much more passive role in the post-truth era: curating content while stepping away from any responsibility for how that content is perceived and consumed.
Indeed, Facebook’s Mark Zuckerberg has said that his company’s policies “have distinguished [them] from some of the other tech companies in terms of being stronger on free expression and giving people a voice”.
The saga continued as Twitter marked more of Trump’s tweets, this time about the protests in Minneapolis, as “glorifying violence”. This prompted him to sign an executive order limiting the legal protection social media platforms have when blocking objectionable content. This opens a whole new can of worms: platforms are no longer just fact-checking, they are also checking public figures’ content for the impact it will have.
Was Twitter right to fact-check Trump’s tweets? As always, the answer isn’t clear cut. This takes us back to the fundamental question of what exactly the role of social media companies is. Are they simply open curators that cannot be held accountable for what users post? Or should they bear more responsibility for the role the content they host plays in inciting violence, affecting behaviour and even impacting democracy? This is a decision Twitter cannot back away from now, and it’s a slippery slope. Social media – like traditional media – does impact societies and does impact democracy. It is far too idealistic to think that people won’t be influenced by what they consume online, especially from powerful public figures.
Indeed, Trump’s posts (and those of some other public figures) are often not wholly accurate, and the public has a right not to be misled. That said, placing your faith in a social media platform whose goal is profit is a murkier proposition. The very decision to use an algorithm designed to show people what content they ought to see and engage with also shapes users’ worldviews, and in turn how they vote. We are now likely to see more heat placed on other platforms, such as Facebook, over what they will do to fact-check and impact-check their content. Inevitably this will have ramifications beyond politics, and will shape how other public figures, and even brands, think about the impact their content will have.
All in all, these are crucial discussions to be having. I’m just surprised it has taken social media companies this long to join them.
Zahra Hasan, strategist, Wilderness