Facebook has moved to head off concern surrounding the proliferation of so-called deepfake videos, which can be tricky to distinguish from genuine clips, by proposing a partial ban on the future use of the technology.
Part of a broader set of measures to clamp down on the spread of disinformation, the toughened stance would see videos removed if they are misleadingly realistic or have been edited using machine learning algorithms.
Deepfakes have become a growing bone of contention for the platform as advancing technology blurs the boundary between what is real and what is artificial, sparking fears that in the wrong hands such techniques could be used to smear political opponents.
Despite such concerns, Facebook has stopped short of a universal ban, instead exempting doctored videos intended as parody or satire. Lightly edited footage that simply reorders or amends spoken words will also be given a pass.
Explaining the new policy, Facebook’s Monika Bickert, veep of global policy management, remarked: “Videos that don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers.
“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.”
To make good on these promises, Facebook has invested $10m in a deepfake detection effort designed to weed out policy-violating material from its pages - after coming under pressure from US senators to act.