England’s reaching the final of the Euros was dampened by the horrific racist abuse directed at footballers on social media, prompting a coalition of advertisers and brands to demand social platforms do better. The response has been lukewarm. Mobbie Nazir, chief strategy officer at We Are Social, wonders whether social platforms will do what needs to be done to combat hate speech.
This week, We Are Social was one of the signatories of the Conscious Advertising Network’s (CAN) open letter to the leaders of the major social media platforms.
The letter called out the lack of adequate action that has led to increased levels of toxicity on social and demanded improved policing of platforms. This comes off the back of a long and chequered history of hate speech on social media and, more recently, the horrific racist abuse we’ve seen directed at England’s inspirational football team.
For people impacted by racial hate, abuse like this is nothing new, both on social and in their offline lives. In some ways, it’s exhausting to see how shocked people are by the incidents of the past few weeks. After all, hate speech on social media has been rife for years. Back in 2018, we issued a report called Braving the Backlash, which showed brands how to tackle it. But in this year’s summer of sport, the shocking levels of racist conversation have thrust the issue back into the spotlight.
Racism and sport are two words used together on a depressingly regular basis. Where racist abuse was once largely directed from the stands, social platforms have put players directly in its line of fire 24/7. And it’s not just an issue of race. A report by GLAAD this year stated that staying safe on social media poses a significant problem for the LGBT+ community, with 64% experiencing harassment and hate speech. Women, people with disabilities – there are many groups targeted by trolls simply for existing.
As a socially led agency, we’re passionate about social. We believe that it has the power to do good, to connect people all over the world in a way that no other medium can. And we’ve seen an outpouring of positivity on social (as in real life) drown out much of the hate in recent days. But the long-term answer to tackling negativity on social media shouldn’t rely on other people’s goodwill. Platforms need a significant shake-up.
Boycotts from the ad industry aimed at encouraging platforms to do better have so far had little impact. Social platforms have been resilient in withstanding criticism and, to date, no boycott has given them any real cause for concern. Thousands of companies boycotted Facebook in 2020, but revenue continued to climb significantly, as many advertisers simply shifted paid media to platforms like Instagram (remaining within the Facebook family). When it comes down to it, advertisers want to go where their audience is – and audiences are on social media.
Social platforms do have anti-hate speech policies in place. Facebook this week issued a statement providing an extensive list of the tools it uses to tackle hate speech online, focused mainly on message and comment controls. Twitter’s approach is less about tools and more about what you can and can’t say. But despite these efforts, content is breaking through.
It’s clear more is needed. More proactive content moderation to root out negative or hateful posts before they show up. Greater transparency about what can and can’t be on the platform. Outrage fuels the business model of social media; platforms need to review and improve their algorithms, their content, and the way they monetise their services. We need to seriously examine whether compulsory ID and a lack of anonymity on social media would help curb hate speech, or whether the inclusivity and privacy challenges posed are too great.
Action has to come soon. Social platforms might be private companies, but they also have a public responsibility. They are becoming inextricably intertwined with our daily lives, whether it’s connecting with friends, signing into websites and apps, or shopping and making payments. This is where they want to be: life locked up via social. Becoming state-level actors like this means they now wield huge power and are in need of better regulation.
The last few years have shown that relying on social media platforms to self-regulate isn’t working. We can add pressure as an industry, and on a more granular level, we can work with our clients to combat hate speech directed at the communities they operate in. But ultimately this is putting a plaster on a gaping wound. To truly change, social platforms need regulation, with real consequences for those who don’t comply. Former Twitter EMEA vice-president Bruce Daisley has proposed one interesting solution: a public ombudsman, funded by a compulsory payment from social platforms, who can deal with individual issues. Social media needs to be policed as rigorously as the press.
We also have to accept that hate speech is a societal problem. Many people in protected bubbles will never have been exposed to racist comments or language – many others will have experienced this their whole lives, online and off. Social media amplifies issues that are endemic in society, so eliminating racism and hate on social media does not complete the job. There’s a lot more we need to do, and can take responsibility for as an industry, when it comes to changing people’s attitudes towards inclusion.
Social media has the potential to be the biggest force for good that media has to offer. We need real action now to prevent it from drowning in a sea of hate.
Mobbie Nazir is chief strategy officer at We Are Social