Google removed over 8m spam or pornographic videos from YouTube over a three-month period in 2017
Google removed more than 8m videos that were either spam or pornographic content from YouTube between October and December 2017, according to its first quarterly report of 2018.
According to the tech giant, more than 6m of these videos were first flagged for review by its machines, and 76% were removed before they received any views.
The use of machine learning to vet content and the publication of quarterly reports are part of the four-step action plan Google announced in December 2017, which also included hiring more people to review content and implementing stricter advertising criteria to promote brand safety.
According to Google, it has since hired more full-time specialists with expertise in violent extremism, counterterrorism and human rights, and has expanded its regional expert teams, which comprise more than 150 academics, government partners and NGOs around the world, including in Asia.
In March, Google announced it had removed 3.2bn ‘bad ads’ in 2017, blocked 79m ads on its network for automatically sending people to malware-laden sites and removed 400,000 of these unsafe sites.
The four-step action plan has coincided with continued revenue growth: parent company Alphabet reported revenues of $31.1bn for the opening quarter of 2018 yesterday (April 23).
Speaking during the company's most recent results call, Sundar Pichai, Google's chief executive officer, discussed his outfit's efforts to keep YouTube a brand-safe environment.
"We're also investing in new experiences like live content where we see tremendous momentum," he said. "Even as we invest in new experiences, we remain very focused on great content. We're aggressively combating content that violates our strict policies through a combination of user and machine flags."