YouTube has said AI is, in some cases, better than humans at purging extremist videos from its platform, and that machine learning technology has helped it double the speed at which it can take down content that violates its rules.
According to YouTube, its AI systems have proven more effective than human reviewers at flagging videos that need to be removed.
Just over a month ago the platform’s parent company announced plans to increase its use of machine learning technology to help it identify extremist and terrorism-related videos on YouTube. The move was part of a four-pronged strategy to combat the spread of such content online following a brand safety crisis earlier this year, during which giants like M&S, the Guardian and the UK government pulled ad spend from YouTube and the Google Display Network over concerns about unintentional ad misplacement.
The tech giant has now posted an update saying it has made progress in tackling the issue, which in some cases resulted in neo-Nazi videos and extreme pornography appearing adjacent to ads from household names.
YouTube claimed that during the past month or so, while it has been testing new AI-powered detection and removal tools, over 75% of the videos it removed for violent extremism were purged before receiving a single human flag. The platform said it believes the accuracy of its systems has improved “dramatically” due to machine learning.
With over 400 hours of content uploaded to YouTube each minute, finding and acting on such footage was previously a significant challenge, but the video giant said its initial use of machine learning has “more than doubled” the number of videos it has removed for violent or extremist content, as well as doubling the rate at which such content is removed.
YouTube has always used a mix of technology and human review to address controversial content on its platform, but the latest developments indicate that investment in the AI space following the brand safety furore is bearing fruit.
Google's strategy to tackle the spread of extremism online and protect advertisers also includes tougher standards for videos and the recruitment of more experts to flag content in need of review. Earlier this year, Google also inked a deal with ComScore to provide independent verification that its inventory is brand safe.
The platform said it has started working with 15 more NGOs and organisations, including the Anti-Defamation League and the Institute for Strategic Dialogue, in an effort to improve its systems’ understanding of issues around hate speech, radicalisation and terrorism, and to better handle extremist content.
In the wake of the brand safety furore, Omnicom was one ad giant that took it upon itself to calm advertisers worried their ads were at risk of being misplaced against inappropriate content on YouTube, using a mixture of AI and human intelligence. At the time, the holding group detailed plans to sift through hundreds of thousands of YouTube videos daily to ensure they were safe for its advertisers to appear against.