TikTok’s fledgling ad offering has been dealt a blow after a BBC investigation found it did not suspend the accounts of people sending sexual messages to children via the video sharing platform.
The report, which calls into question TikTok's brand safety credentials, comes just three days after the company announced it was partnering with not-for-profit organisation Internet Matters to underscore its commitment to internet safety.
The BBC’s Trending team collected “hundreds” of sexual comments posted on videos uploaded by teenagers and children as part of the investigation, according to a statement.
The comments were reported via TikTok's user interface and were subsequently removed from the platform. However, their authors were allowed to remain on the site, according to the BBC.
The broadcaster’s reporters were able to identify a number of users who repeatedly asked teenage girls to post sexually explicit messages. They also uncovered several accounts being run by children as young as nine years old, despite the site’s age gate of 13 and above.
The BBC said TikTok has no plans to start verifying users' ages anywhere outside the United States, where it was hit with a $5.7m Federal Trade Commission fine after it was found to have illegally gathered the personal data of children under the age of 13.
The app does not currently have a concrete ad model in place; however, it has been in discussions with US agency partners about plans to open up a self-managed, biddable ad platform, according to Adweek.
A rogue ad for food delivery company GrubHub was also spotted in January, according to Digiday, suggesting the platform is in the process of testing formats.
In response to the investigation, a TikTok spokesperson said it was committed to enhancing its existing measures, which comprise "a combination of policies, technologies, and moderation strategies", and to introducing additional technical and moderation processes.
They wrote: "We have a dedicated and growing team of human moderators to manually cross review tens of thousands of videos and accounts, and we constantly roll out internal trainings and processes to improve moderation accuracy and efficiency. While these protections won't catch all instances of misuse, we're committed to improving and enhancing our protective measures, and we use learnings like these to continually hone our moderation efforts.
"Together with our industry peers, we participate in conversation with experts and third-party organisations to explore future solutions to address this challenge. We care deeply about the safety of our users and we will continue to work hard on this front. We welcome suggestions from media and third-party organisations like the BBC."
The BBC report comes as advertisers continue to battle the duopoly over issues of brand safety. Last month saw YouTube fleetingly lose a number of its advertiser partners following reports of paedophilic comments being published underneath videos of children, while Facebook found itself under fire for hosting the livestreamed video of the Christchurch massacre in New Zealand.
The World Federation of Advertisers subsequently urged brands to put pressure on all social media platforms to do better.
TikTok will now find itself faced with the same pressure before even officially opening itself up as a partner.