
Social media managers weigh in on regulating hate speech online


By Amy Houston, Senior Reporter

May 21, 2021

The Drum’s social media executive Amy Houston shares her thoughts on what social media teams need from platforms like Facebook in order to regulate hate speech.


What regulations do social media teams want to see from Facebook in combating hate speech?

Leaked emails that surfaced last week suggest Starbucks is considering leaving Facebook over hate-speech-related comments. If the coffee giant follows through on its threat, it will be a huge blow to the social network, with other brands likely to follow suit.

It’s a massive issue, and it raises the question: what do social media teams want and need to see from Facebook and co to get a handle on this?

One thing that would greatly benefit social media teams, in my opinion, is greater transparency from the platforms about how they audit content, potentially backed by more human moderators.

The need for social platforms to be more upfront and honest seems to be a key issue within the industry. I spoke to Chloe Hunt, marketing assistant at Beauty Bay, who said that platforms need to do more to promote the tools they already offer, such as Instagram’s manual comment filter. “Us as a brand have been able to input the most hateful words we used to see on a daily basis, and we are already feeling such a positive effect of just not reading those comments day to day. I feel the platforms should promote these hacks a little bit more, as it could save somebody from being pushed over the edge,” she noted.

A recent report from Glaad, a non-governmental media monitoring organization, which assessed social media safety, states: “Surveying the current landscape of leading social media platforms, the entire sector is effectively unsafe for LGBTQ users.” It also recognizes that there is an immediate need “to push major social media platforms to make their products safer”.

Facebook has an estimated 1.9 billion daily active users, so it cannot realistically moderate everything, but for the platform to be deemed unsafe for certain users is staggering. Social media teams and users alike need to feel safe when publishing content. Online hate has the potential to turn into real-world violence and social networks need to be held accountable.

It’s obvious that many social networks need to become more efficient at removing harmful content and to stop allowing their algorithms to fuel negativity. Getting to the root of this and confronting bias in AI would be a step in the right direction.

Part of a social media manager’s job is to listen to and monitor certain topics and trends within the industry. Do the big social networks have a good handle on this themselves? In my opinion, they need to get better at monitoring misinformation, restricting harmful hashtags and auditing ads.

Over the coming weeks, it will be interesting to see how the situation between Starbucks and Facebook develops, and whether the fallout will prompt social networks to listen to what social media teams have been asking of them for years.

I asked The Drum’s social media community for their thoughts on this topic, and you can read what they had to say below. If you would like to join in this conversation, use the hashtag #TheDrumSocial.
