Google announces four measures to combat spread of extremist content on YouTube

By Jessica Goodfellow, Media Reporter

June 19, 2017 | 4 min read

Google has announced four measures to tackle the spread of extremist content on YouTube, after coming under mounting pressure from governments and brands to stop enabling terrorist propaganda.

The company said it is working with government, law enforcement and civil society groups to tackle the problem of violent extremism online, which it said should have “no place” on its services.

“Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all. Google and YouTube are committed to being part of the solution,” wrote Kent Walker, the senior vice-president and general counsel of Google, in an editorial published in the Financial Times newspaper.

Walker said Google had long worked to remove terrorist content, but acknowledged that more had to be done.

The first of the four steps will see the company use machine learning to train its automated systems to better identify terror-related videos on YouTube.
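
Google has not published details of these systems, but the broad technique, supervised classification of uploads followed by human review, can be illustrated with a deliberately simplified sketch. Everything below is an illustrative assumption rather than a description of YouTube’s actual pipeline: the toy training examples, the use of text metadata as the only signal, and the scikit-learn models.

```python
# Hypothetical sketch: a supervised classifier that flags uploads for human
# review based on text metadata alone. Purely illustrative; YouTube's real
# systems operate on many more signals, including the video content itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples (1 = terror-related, 0 = benign). Real training data
# would come from videos already reviewed and removed by human moderators.
texts = [
    "propaganda film glorifying a terror attack",
    "recruitment message urging viewers to join",
    "news report analysing a recent terror attack",
    "documentary on the history of extremism",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score a new upload; anything above a threshold is queued for human review
# rather than removed automatically.
score = model.predict_proba(["propaganda film glorifying violence"])[0][1]
if score > 0.5:
    print(f"flag for human review (score={score:.2f})")
```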

Recognising that technology is “not a silver bullet”, YouTube is also almost doubling the size of its Trusted Flagger programme, a group of experts with special privileges to review flagged content that violates the site’s community guidelines.

It will add 50 expert NGOs to the 63 organisations already in the programme, and will give them additional grant money. The expanded effort will allow the company to draw on specialist groups to target specific types of videos, such as self-harm and terrorism.

The third step is a tougher stance on videos that do not clearly violate YouTube’s policies, such as “videos that contain inflammatory religious or supremacist content”. These videos won’t be removed, but will be placed behind a warning and won’t be permitted to generate advertising revenue.
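
In effect this creates a third enforcement state between removal and normal publication. The sketch below models that decision with invented state names and inputs; the actual policy logic is not public.

```python
# Hypothetical enforcement states for a reviewed video. Names and inputs are
# invented for illustration; YouTube's actual policy logic is not public.
from enum import Enum, auto

class Action(Enum):
    REMOVE = auto()  # clear violation of community guidelines
    LIMIT = auto()   # borderline: warning interstitial, no ad revenue
    ALLOW = auto()   # no action

def decide(violates_guidelines: bool, inflammatory: bool) -> Action:
    if violates_guidelines:
        return Action.REMOVE
    if inflammatory:  # e.g. inflammatory religious or supremacist content
        return Action.LIMIT
    return Action.ALLOW

# A video that stops short of a clear violation is limited, not removed.
assert decide(violates_guidelines=False, inflammatory=True) is Action.LIMIT
```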

The move will be welcomed by brands that found themselves inadvertently funding terrorism and hate speech when their advertising appeared against unverified YouTube videos, a practice exposed by a series of investigations by The Times.

Those investigations, which led several major brands to pull their ad spend from YouTube until the issue was resolved, prompted Google to revamp its ad policy, giving brands greater control over where their ads appear and policing “hateful, offensive and derogatory content” more aggressively.

Finally, the company will do more on counter-radicalisation by building on its Creators for Change programme, redirecting users targeted by extremist groups such as Isis towards counter-extremist content.

“This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining,” Walker wrote.
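
At its core this is a targeting problem: criteria associated with recruitment content are matched against user activity, and counter-narrative videos are served in response. The sketch below illustrates only the matching step; the targeted terms and playlist identifiers are invented placeholders, and the real criteria are curated with counter-extremism experts.

```python
# Hypothetical sketch of the redirect targeting step. The terms and playlist
# IDs below are invented placeholders, not real targeting criteria.
from typing import Optional

COUNTER_NARRATIVE_PLAYLISTS = {
    "recruitment": "PL_counter_recruitment",  # invented playlist ID
    "propaganda": "PL_survivor_testimony",    # invented playlist ID
}

TARGETED_TERMS = {
    "how to join": "recruitment",
    "martyrdom video": "propaganda",
}

def redirect_for(query: str) -> Optional[str]:
    """Return a counter-narrative playlist to surface, if the query matches."""
    q = query.lower()
    for term, topic in TARGETED_TERMS.items():
        if term in q:
            return COUNTER_NARRATIVE_PLAYLISTS[topic]
    return None  # no match: normal results, no intervention

print(redirect_for("How to join the fighters"))  # PL_counter_recruitment
```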

In addition, the company said it would work with Facebook, Microsoft and Twitter to establish an industry body that would develop technology smaller companies could also use to police problematic content.
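
One concrete form such shared technology can take is a common database of digital fingerprints (hashes) of known terrorist images and videos, which the same four companies had announced in December 2016 and which lets any participant detect re-uploads. The sketch below uses a plain SHA-256, which only catches byte-identical copies; production systems rely on perceptual hashes that survive re-encoding and cropping.

```python
# Minimal sketch of shared-hash matching. SHA-256 only catches exact
# byte-for-byte re-uploads; real systems use perceptual hashing, which
# tolerates re-encoding, cropping and watermarking.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical shared database, seeded with fingerprints of content that one
# member company has already identified and removed.
known_bad = [b"<bytes of a previously removed video>"]
shared_hash_db = {fingerprint(b) for b in known_bad}

def is_known_terror_content(upload: bytes) -> bool:
    # Any participating company can check new uploads against the shared set.
    return fingerprint(upload) in shared_hash_db

assert is_known_terror_content(b"<bytes of a previously removed video>")
```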

"Extremists and terrorists seek to attack and erode not just our security, but also our values; the very things that make our societies open and free," wrote Walker. "We must not let them."

Google's announcement comes a few days after Facebook made a similar pledge to counter the spread of terrorist content on its platform by using artificial intelligence (AI), building a team of security experts and increasing cooperation with third parties.
