Facebook is upping the ante in how it punishes offenders who circumvent its ad review policies and lure users into clicking phony links.
Just as it is using AI to help counter the spread of terrorist content, the Silicon Valley behemoth is using a combination of machine learning and human review processes to stamp out ads which create “negative and disruptive experiences” for people.
Facebook has already slapped a ban on thousands of advertisers carrying out the practice, which it refers to as ‘cloaking’.
The financially-motivated technique sees ‘bad actors’ disguise the true destination of an ad or post, or the real content behind the URL, to take users to an unrelated page. The actors then generate revenue from the resulting views or clicks via affiliate deals. A simple Google search throws up thousands of tutorials on Facebook cloaking, making the practice easily accessible.
These unrelated pages often host content that can be shocking to users who aren’t expecting it – like pornography. They can also promote muscle-building or cosmetic scams.
Cloakers pull the wool over Facebook’s eyes by showing one ad to Facebook’s approval team and another to the audience clicking on the ad, but the social giant has said it is using a combination of AI and expanded human review processes to help it identify, capture and verify cloaking.
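In its simplest form, the serving side of such a scheme is just a branch on who is asking. The sketch below is a deliberately simplified illustration, not any real cloaker’s code: the user-agent markers mimic Facebook’s documented crawler identifiers, while real cloakers also key off IP ranges, cookies and referrers.

```python
# Illustrative only: how a cloaked landing page decides what to serve.
REVIEWER_USER_AGENTS = ("facebookexternalhit", "facebot")  # Facebook crawler markers

def page_for(user_agent: str) -> str:
    """Return a 'clean' page to a reviewing crawler and the real
    payload to everyone else -- the essence of a cloaking scheme."""
    ua = user_agent.lower()
    if any(marker in ua for marker in REVIEWER_USER_AGENTS):
        return "<h1>Healthy recipes</h1>"       # shown to the review system
    return "<h1>Miracle muscle pills!</h1>"     # shown to real visitors
```

Because the decision happens server-side at request time, the approval team and the paying audience can see entirely different pages behind the same approved URL.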
“We can now better observe differences in the type of content served to people using our apps compared to our own internal systems,” said Facebook’s product management director Rob Leathern in a blog post co-written with software engineer Bobbie Chang. “In the past few months these new steps have resulted in us taking down thousands of these offenders and disrupting their economic incentives for misleading people.”
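The counter-measure Leathern describes amounts to fetching the same destination as the review system and as an ordinary user, then comparing what comes back. A minimal, hypothetical sketch of that comparison (the normalization step and the idea of hashing page bodies are illustrative assumptions, not Facebook’s actual pipeline):

```python
import hashlib

def fingerprint(html: str) -> str:
    """Hash a normalized page body so identical content compares equal
    regardless of case or whitespace differences."""
    normalized = " ".join(html.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def looks_cloaked(reviewer_page: str, user_page: str) -> bool:
    """Flag a destination for human review when the content served to
    the review system differs from the content served to a normal user."""
    return fingerprint(reviewer_page) != fingerprint(user_page)
```

An exact-match check like this would flag almost every modern page, since legitimate sites serve dynamic content; a production system would need fuzzier similarity measures, which is presumably where the machine learning and expanded human review come in.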
Facebook has said it will remove any pages caught engaging in cloaking, and that it will be collaborating closely with other companies in the industry to find new ways to combat and punish bad actors. The drive from the tech giant comes amid a crackdown on fake news and spam content within its walls.
The motivation spurring individuals and businesses to use cloaking to make money from page views generated via Facebook and Google links is clear – it comes down to eyeballs. The two companies are the largest media owners in the world, having captured a combined 20% of international ad expenditure in 2016.
Last year, Google said it disabled 1,300 advertiser accounts for engaging in ‘tabloid cloaking’ – a practice in which rogue advertisers game the system by presenting users with misleading URLs posing as links to news articles.
The move from Facebook comes amid a wider push by digital players to make the ecosystem more transparent and improve the experience for users. There have also been calls from some of the world’s biggest advertisers – including Unilever and P&G – for the industry to do more to tackle issues like ad fraud.
In 2016 alone, Google removed 1.7bn bad ads for a variety of offenses. The total included 68m ads for unapproved pharmaceuticals, 5m ads for payday loans and 80m ads deemed deceptive or shocking to users.