Meta to require disclosure of AI-generated content in political ads starting next year
The policy arrives amid growing concerns about the potential harms of AI-generated deepfakes – and other forms of digitally modified content – leading into the 2024 US presidential election.
Meta joins a host of other tech companies who have taken steps to require the disclosure of digitally modified content. / Adobe
Meta has become the latest big tech company to take steps against tech-enabled misinformation in political advertising.
Today, the company – owner of Instagram, Facebook, Messenger and WhatsApp – announced a new policy requiring advertisers to disclose when certain socially or politically charged content has been digitally created or modified using AI or another technology.
Slated to go into effect sometime next year, the new global policy aims in part to combat the rise of deepfakes – fraudulent depictions of real persons or events created using AI – in political ads. This is an issue that’s become increasingly urgent in political, tech and marketing circles as the United States draws closer to next year’s presidential election.
Though governments are taking some significant steps toward direct regulation of AI, it's largely up to private companies, for now, to mitigate the risks that the technology poses, such as its potential to spread harmful misinformation. To that end, Google recently announced that verified political advertisers using its platform to promote an individual running for office would need to disclose digitally modified content. Other platforms, including Adobe and TikTok, have implemented labeling policies in an effort to notify users when AI has been used in the content creation process.
Once Meta’s new policy goes into effect, advertisers using the company’s platforms will, in some cases, “have to disclose whenever a social issue, electoral or political ad contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered,” the brand wrote in a blog post published this morning. Advertisers will, for example, need to disclose when technology was used to “depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”
Motherboard reported yesterday that AI-generated, photorealistic images that appear to depict bombed-out buildings and other forms of destruction in Israel and Gaza are being sold on Adobe’s stock image site. Some are clearly labeled in their title as being created using AI, while others aren’t, according to the report.
Meta wrote in this morning’s blog post that advertisers won’t need to disclose digital modifications, such as some cases of image sharpening or color correcting, which “are inconsequential or immaterial to the claim, assertion, or issue raised in the ad.” Ads that aren’t properly disclosed could be removed, the company added, “and repeated failure to disclose may result in penalties against the advertiser.”
Advertisers will be given the option to disclose content that meets the forthcoming policy’s criteria during the process of building an ad on Meta platforms.