YouTube has taken action in the UK and other English-language markets to limit the spread of so-called "borderline content", including false and extremist material, by tweaking its recommendation algorithms to reduce such videos' reach.
An earlier trial in the US is credited with halving views of toxic content driven by recommendations.
The video-sharing giant is now actively moving against content that "brushes right up" against its policy line, promoting verified and accurate material over potentially damaging videos.
At the heart of this new policy is a commitment to reduce the spread of borderline content; remove banned content as quickly as possible; promote authoritative voices; and set a higher bar for which channels can monetise their content.
YouTube chief executive Susan Wojcicki said: “A commitment to openness is not easy. It sometimes means leaving up content that is outside the mainstream, controversial or even offensive. But I believe that hearing a broad range of perspectives ultimately makes us a stronger and more informed society, even if we disagree with some of those views.”
Concerns have long been raised over YouTube’s recommendation algorithms, which have been accused of prioritising engagement over factually correct material, allowing videos promoting 'miracle cures', flat Earth claims and conspiracy theories to proliferate.
These fears have already prompted YouTube to devise a fact-checking tool for debunking conspiracy theories.