Child abuse videos found on Instagram’s IGTV – will advertisers steer clear?

Children’s charity NSPCC supported the investigation / IGTV

Sexually suggestive videos of children and footage of a mutilated penis have been found on Instagram’s new longform video section IGTV – raising concerns over the platform’s vetting processes.

The content was found during an investigation by Business Insider into IGTV, which was launched by the social network in June, and involved the publication setting up a dummy account posing as a child.

Based on the profile information, it said that IGTV’s algorithms served up a number of suggested videos including two showing young girls in "sexually suggestive" poses as well as other graphic and sexually-themed videos.

The videos were removed five days after initially being reported to Instagram.

Children’s charity NSPCC supported the investigation and said it was “another example of Instagram falling short by failing to remove content that breaches its own guidelines.”

An Instagram spokesperson said it had “zero tolerance for anyone sharing explicit images or images of child abuse”, adding that the company has taken measures to proactively monitor potential violations of its community guidelines.

“Just like on the rest of Instagram, we encourage our community to report content that concerns them,” the spokesperson added.

Other platforms, including parent company Facebook, Snapchat and YouTube, have come under fire for failures in their moderation policies; however, this exposé marks the first time Instagram has found itself in the line of fire.

The unearthing of these videos within IGTV is particularly reminiscent of the multiple occasions on which inappropriate, violent and child abuse content was discovered on YouTube, the platform IGTV was set up to rival. In the wake of YouTube’s national-headline-making troubles last year, advertisers on the platform (including HP, Cadbury, Adidas and Diageo) almost unanimously froze spend out of distrust of Google’s content-vetting process.

It is still early days for IGTV, which does not yet have a monetisation strategy in place for its videos, meaning that no advertisers will have been exposed to the content Business Insider found. However, some brands and publishers – including Netflix and BuzzFeed – have been experimenting with the platform natively since it launched, posting content to promote their products and editorial.

Instagram famously rolls out products to users before adding advertising, and when it does, advertisers have tended to flock. Merkle recently reported that ad spend on Instagram was growing at four times the rate of investment in Facebook ads, with brands’ Instagram spend up 177% year-on-year during Q2 of 2018, compared with Facebook’s 40% increase.

A fraught path to monetisation?

Paul Astbury, who leads specialist sales at Integral Ad Science (IAS), said this controversy is an early sign that, just as on other large social platforms, the sheer volume and speed at which content is created on IGTV makes it challenging to police – a point that could make it a more challenging sell if the platform does monetise.

“Great strides have already been made when undertaking the moderation of content by humans, in fact both YouTube and Facebook are doing this but human eyeballs can only do so much when considering inventory at such scale," Astbury said.

"Any human content moderation needs to be coupled with third party verification that can take advantage of data modelling and machine learning.”

Facebook has vowed to double the number of people in its safety and security teams to 20,000 by the end of the year, including 7,500 who will review content.

Similarly, YouTube’s parent company Google has also increased the number of people it employs to remove violent and extremist content.

“While IGTV does not carry ads yet, this is still a broader concern that this kind of content appears on platforms – and we continue to urge application of third-party research to detect and block this kind of harmful content,” said Bethan Crockett, senior director of brand safety and digital risk for WPP’s media agency GroupM.

GroupM told The Drum earlier this year that it has been working with clients on social content strategies for IGTV, testing organic posts and programming distributed via brands' own accounts.

Crockett continued: “The continuous development of policy, technology and human review procedures to make their user-generated content environments safer is of crucial importance. In the broader brand safety context, when content is monetized, to protect clients, we implement all brand safety settings made available by social platforms and have actively called for the urgent application of third party verification to make available additional layers of protection to keep brands away from objectionable content.”

Astbury went on to explain that social platforms have already begun to open up their inventory to third-party measurement and verification to promote greater transparency, but guarantees of brand safety will need to be a priority.

"Traditionally, their focus has been on viewability and identifying fraudulent activity," he said. "Brand safety is the next logical step for these social platforms to take in ensuring their content provides a suitable environment for major brands."

However, Sam Scott, a marketing consultant and tech columnist for The Drum, said that if Instagram were to monetise IGTV, advertisers looking to ensure brand safety should create a comprehensive whitelist of the only channels or users on which their ads can appear.

“IGTV will face the similar issues [to YouTube] for one simple reason: algorithms, not human beings, will make the decisions. And algorithms can be amazingly dumb and will always have the biases of the people who wrote them,” he said.

“[Whitelists] will take a lot of research done by actual human beings, but the best results never come from letting machines do the work for you."

The Drum has asked a number of advertisers whether this would affect their decision to invest in Instagram’s IGTV platform in the future, but was awaiting comment at the time of publication.

The investigation comes as platforms face increasing scrutiny from government regulators. Earlier this week, Ofcom announced it would lead an unprecedented coordinated clampdown on harmful content hosted by social media firms. Its survey of 1,700 people aged 16 and above found that 52% of the public backed tighter regulation of social media.
