After a “difficult year” for its video-sharing site YouTube, Google has unveiled a host of brand safety assurances as part of a concerted effort to lessen concerns raised by successive ad misplacement controversies over the past 12 months.
Senior Google executives Neal Mohan, chief product officer; Robert Kyncl, chief business officer; and Paul Muret, vice president of display, video & analytics, today (16 January) acknowledged they “need to do more”, and jointly unveiled a multi-tiered update to YouTube’s approach to protecting marketers from having their ads served against inappropriate content.
This includes stricter criteria for content monetization on YouTube; manual reviews for Google Preferred; greater controls for advertisers over what they deem ‘appropriate content’; and greater protection for YouTube creators.
A new approach to YouTube monetization
Under the updated rules, content creators must now have 1,000 subscribers and 4,000 hours of watch time over the past 12 months – in addition to the earlier qualification of 10,000 total views – to qualify for the YouTube Partner Program (YPP), which in turn lets them monetize their content.
Under the new terms, YouTube will remove the accounts and channels of any user issued three community guidelines strikes, according to Muret’s blog post.
He added: “Of course, size alone is not enough to determine whether a channel is suitable for advertising. We will closely monitor signals like community strikes, spam, and other abuse flags to ensure they comply with our policies.
“Both new and existing YPP channels will be automatically evaluated under this strict criteria and if we find a channel repeatedly or egregiously violates our community guidelines, we will remove that channel from YPP.”
Manual reviews for Google Preferred, plus greater controls over ad placement
As part of the update, all channels included in Google Preferred will be manually curated and ads will only run on videos that have been verified to meet its ad-friendly guidelines.
"We expect to complete manual reviews of Google Preferred channels and videos by mid-February in the US, and by the end of March in all other markets where Google Preferred is offered," added Muret.
Also included in the update was the announcement of a series of tie-ups with third parties, which Google claims will give advertisers more control over the content they deem suitable to have their ads served against.
This involves the introduction of a three-tier suitability and brand safety reporting system that will allow advertisers to reflect their view of appropriate placements for their brand.
YouTube is currently working in beta with Integral Ad Science (IAS) to launch such a service, with other partners including DoubleVerify, as well as ComScore, Moat and OpenSlate set to follow.
“The challenges we faced in 2017 have helped us make tough but necessary changes in 2018,” wrote Muret.
“These changes will help us better fulfill the promise YouTube holds for advertisers: the chance to reach over 1.5 billion people around the world who are truly engaged with content they love.”
Additional changes to YPP to better protect creators
Meanwhile, a separate post co-authored by Mohan and Kyncl notes that the changes will affect a significant number of YouTube creators, though the vast majority of those likely to be affected earn less than $100 per year from the video-sharing network.
“We’re making changes to address the issues that affected our community in 2017 so we can prevent bad actors from harming the inspiring and original creators around the world who make their living on YouTube," read their post. "A big part of that effort will be strengthening our requirements for monetization so spammers, impersonators, and other bad actors can’t hurt our ecosystem or take advantage of you, while continuing to reward those who make our platform great.”
It went on to read: “We’ve arrived at these new thresholds after thorough analysis and conversations with creators like you. They will allow us to significantly improve our ability to identify creators who contribute positively to the community and help drive more ad revenue to them (and away from bad actors). These higher standards will also help us prevent potentially inappropriate videos from monetizing which can hurt revenue for everyone.”