
Do social media platforms need to adopt a human-focused moderation system to tackle abuse?


By Amy Houston, Senior Reporter

July 16, 2021

The Drum’s social media executive Amy Houston explores whether social networks need to adopt a more human approach to content moderation, and why they aren’t moving fast enough to combat racism online.


Three Black footballers were targeted with online racist abuse / Joshua Hoehne / Unsplash

All eyes are once again on social media networks this week after the disgusting display of online racism aimed at three Black footballers – Marcus Rashford, Bukayo Saka and Jadon Sancho – following England’s Euros defeat last week.

Statements from Facebook (which owns Instagram) and Twitter have condemned racial abuse, but the words of these Silicon Valley giants haven’t been matched by their actions.

Digital abuse is not new. A 2020 report from the Professional Footballers’ Association found that 43% of Premier League players in the study experienced targeted and explicitly racist abuse, and that 29% of the abuse came in emoji form. The study also found that “Twitter’s algorithms were not effectively intercepting racially abusive posts that were sent using emojis”.

At the time, England and Manchester City forward Raheem Sterling said: “I don’t know how many times I need to say this, but football and the social media platforms need to step up, show real leadership and take proper action in tackling online abuse. The technology is there to make a difference, but I’m increasingly questioning if there is the will.”

This raises the question: do social media platforms need to move toward a more human approach to moderation? Algorithms aren’t keeping up with the speed at which our language evolves, and Matthew Cook, culture and brand lead, Gravity Road, says: “Human moderation can seem like the simple answer, however there is a huge emotional burden on moderators to sift through offensive and traumatic material. We end up shifting the human cost and putting it out of sight.”

“It comes down to a question for society: what do we want from the technology, and what are we not willing to give up for it?” Tech will always play a role in moderation, but maybe “we need a combination of increased human moderation overseeing better tech,” he adds.

The topic of content moderation has been debated for years, with many users left feeling that social networks have been extremely slow in putting genuinely meaningful practices in place. An open letter penned this week by advertisers and brands, led by the Conscious Advertising Network, has condemned how top social networks enable racist abuse.

Human interaction “will always be the key,” says Aaron Seale, senior director of creative content at Jellyfish Social. “But as platforms increase the opportunities for users to create content and have a voice, it’s unimaginable how much content these moderators would need to review.”

He notes that “viewing all these messages constantly will no doubt create burnout emotionally and physically,” and another potential solution is getting social networks to focus more on education and “invest in services on a curriculum level, through early years to early adult”.

A petition that’s been shared widely online is calling for verified ID to be a requirement for opening a social media account, as a way to stop anonymity being weaponized to spread hate. In theory, this sounds like a good solution, but it could harm the very people it’s trying to help. Demanding government-issued identification to open a social account could alienate large groups of people who don’t have access to, or the means to obtain, ID. And in countries where talking openly online about your sexual orientation or about the government can get you arrested, “protecting anonymity is crucial to freedom of speech,” says Cook.

Are newer social media platforms such as TikTok making more progress in content moderation? In a statement on July 9, Eric Han, head of US safety, TikTok, said the video-sharing app was advancing its approach to user safety and added that while no technology “can be completely accurate in moderating content, where decisions often require a high degree of context or nuance, we’ll keep improving the precision of our technology to minimize incorrect removals”.

While no social media platform can be perfect, “we only need to look at Black TikTok creators feeling suppressed in the wake of Black Lives Matter last year or the recent strike protesting the appropriation of Black creativity to see TikTok isn’t doing enough,” Essi Nurminen, head of strategy, Born Social, tells me.

Many social media users have noted that when attempting to flag racially abusive comments on the pages of the three footballers, they were met with generic responses from Instagram stating that the platform is dealing with a high volume of reports and is unable to review them all. However, it’s not only social networks that should be doing more, “but the government, and we as a society,” Nurminen concludes.

Let’s hope that sentiment starts to ring true.
