Tech Artificial Intelligence Brand Strategy

After Emma Watson deepfake ad scandal, experts share risks (and rewards) of synthetic media


By Kendra Barnett, Associate Editor

March 8, 2023 | 12 min read

Synthetic media is here to stay. And that’s not necessarily a bad thing, marketing experts say.


Synthetic media will soon be ubiquitous in advertising, experts predict / Adobe Stock

English actress Emma Watson made headlines this week – but not for a forthcoming film or new philanthropic effort. Rather, Watson’s likeness was used in a series of sexually suggestive deepfake ads that appeared on Meta platforms including Facebook, Messenger and Instagram, as reported by NBC. Scarlett Johansson’s likeness was also exploited in some of the ads, which promoted Facemega, an app that markets itself as a tool for creating ’DeepFake FaceSwap’ videos.

Deepfakes are a type of synthetic media that uses AI to visually or audibly manipulate content – often swapping a person’s face for a believable depiction of a celebrity or politician, tricking viewers into believing that the famous person said or did something that never happened.

The series of provocative deepfake ads deployed on Meta platforms totaled more than 230, a review of the tech company’s ad library found.

The ads ignited a Twitter storm when a freelance journalist and student named Lauren Barton tweeted out a screen recording of one of the ads, which she said she encountered on a photo editing app. The tweet garnered more than 10m views.

Kat Tenbarge, the NBC journalist who broke the story yesterday, wrote on Twitter: “This is just a new way to sexually harass women. It’s taking away your bodily autonomy and forcing you into nonconsensual sexual material, which can then be used to humiliate and discriminate against you. Not to mention the psychological effects.”

Meta pulled the ads after learning about them. “Our policies prohibit adult content regardless of whether it is generated by AI or not, and we have restricted this Page from advertising on our platform,” a company spokesperson told The Drum.

Facemega has reportedly been wiped from Apple’s App Store but is still available on Google Play.

This isn’t the first time that deepfakes have been used in advertising. Just last month, a TikTok ad featuring comedian and commentator Joe Rogan endorsing a “libido booster for men” was widely suspected to be a fake. TikTok removed the video. Now, technology experts and ad industry leaders say this is just the beginning – as AI becomes more advanced, advertising is likely to see an influx of deepfakes.

The rise of synthetic media in marketing and advertising

Meta’s deepfakes scandal is evidence of a much larger trend playing out across the internet.

“The use of synthetic media is becoming more pronounced, not just in ads, but in a myriad of media,” says Dr Shawn DuBravac, a technology writer and futurist. Advertising, he says, is an obvious application; he points to good-natured examples in which celebrities such as Shah Rukh Khan, David Beckham and Bruce Willis have agreed to star in ads that employ synthetic media techniques.

As the technology becomes more widely available, it will become a staple in the advertising arsenal, DuBravac predicts. “We are on the cusp of a massive transformation. In the years to come, nearly every brand will use synthetic media technology in some ways,” he says.

The reach and implications go far beyond basic brand advertising – last year, during his run for office, Yoon Suk-yeol, who is now president of South Korea, deployed a number of deepfake videos promoting his campaign messaging. Some estimate that the tactic helped him to win favor among the public.

In a more nefarious example, early on in the Russia-Ukraine conflict, hackers infiltrated a Ukrainian television station and deployed a deepfake video of president Volodymyr Zelenskyy ordering his troops to surrender.

But not all applications of synthetic media are ill-intended – in fact, some are designed to promote social causes or values. In the midst of the 2020 US presidential election, RepresentUs, a bipartisan nonprofit organization dedicated to advancing voting rights, aired ads featuring deepfakes of North Korean dictator Kim Jong-un and Russia’s Vladimir Putin that urged Americans to fight political corruption, voter suppression, gerrymandering and other election issues in the name of democracy.

“Like any other tech, deepfaking or AI-generated content can be used for good or for ill, depending on who is wielding the tool and to what end,” says Kerry McKibbin, president and partner at Mischief, the creative agency that developed the campaign for RepresentUs. “In our case, it was critical for people to understand that we were trying to provoke but not dupe them – which is why we stated within the ads: ‘the footage is not real, but the threat is.’”

In another cause-focused example, Dove worked with Ogilvy last year to create ‘Deepfake Moms’ – a campaign that sought to illuminate the toxic nature of much of the beauty advice doled out by influencers on social media. The team created deepfakes of real mothers of teen girls, and put common influencer-endorsed advice in the moms’ mouths – from getting Botox to using simple ‘hacks’ to cinch the waist – to the shock and horror of those same moms and daughters.

The promise of AI

Many in the media and marketing space see synthetic media as a broad category that also includes generative AI tools such as OpenAI’s wildly popular ChatGPT – tools with obvious creative potential.

Just this week, Publicis Media launched an internal course on AI-based synthetic media for some 70 employees, who learned about the state of AI today and how the technology can be applied in innovative ways in media and brand marketing.

The global agency network is bullish on the potential of synthetic media. It has incorporated AI into its campaigns for some time already – in 2019, Publicis’ Spark Foundry launched an audio campaign that deployed AI-generated music personalized to individual consumers’ unique tastes. The following year, Publicis Mexico teamed with Mexican human rights organization Propuesta Cívica to create a campaign featuring a deepfake of Javier Valdez, a journalist who was murdered in 2017 in Culiacán, in an effort to advance press freedom and demand justice for journalists who have been assassinated.

“We’re very much attuned to how to use these tools, and how to activate them appropriately – and if we're unsure, we do all the research needed to make sure that we follow those rules, because we don’t want to get into hot water,” says Andrew Klein, Publicis Media’s senior vice-president of creative technology.

“It’s going to be good to see how we create the best experiences using [synthetic media] and unlock some new creative potential.” Klein says that Publicis Media has a number of additional AI-based projects coming down the pike and is also working with clients to develop their own custom-made AI models.

Other industry leaders agree that synthetic media will unlock new value. For example, various kinds of deepfakes or digital twins can help “drive down cost, reduce travel and carbon footprints and can even extend beyond the realms of reality,” says Grace Francis, chief creative and design officer at media agency WongDoody. She anticipates that demand for digital twins and synthetic media will only grow.

Navigating a new minefield

Of course, the potential dangers of synthetic media remain apparent to many in the field. “The risks are clear – misleading consumers, emotional or financial damage to those who are deepfaked,” says Mischief’s McKibbin.

Plus, as Alex Wilson, executive creative director at brand experience agency Amplify says, “as the technology continues to refine, and AI learnings get better and harder to spot, we could be entering a new age of fake news and unreality that could lead to some serious real-world issues.”

McKibbin, for her part, calls for increased regulation and consumer education to mitigate risk. “It’s hard to know what the future will bring, especially as deepfakes get less ‘uncanny valley’ and therefore less easily distinguished from reality – and also possibly cheaper, easier, and faster to produce. There needs to be a combination of regulation and consumer awareness. Either creators will eschew their use for nefarious purposes and platforms will get better at policing them, or consumers will get more and more suspicious of every single piece of content they take in, or a bit of both.”

Publicis Media’s Klein believes it’s more likely to be the former. He predicts that in the future, to combat misinformation vis-à-vis synthetic media, platforms will include disclaimers indicating when a post includes AI-generated content, much like the warning labels that Twitter and Meta have already rolled out to hamper the spread of false information.
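Klein’s prediction can be made concrete: a platform could attach a machine-readable disclosure flag to any post containing synthetic media and render a warning label alongside it. A minimal sketch in Python – the `Post` structure and `render_disclaimer` helper are hypothetical illustrations, not any platform’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Post:
    """Hypothetical post record; `ai_generated` is a disclosure flag the
    platform would set when synthetic media is declared or detected."""
    author: str
    body: str
    ai_generated: bool = False

def render_disclaimer(post: Post) -> str:
    """Prepend a warning label to AI-generated content, in the spirit of the
    misinformation labels Twitter and Meta have already rolled out."""
    if post.ai_generated:
        return "[Notice: this post contains AI-generated content]\n" + post.body
    return post.body

ad = Post(author="brand_account",
          body="New spot starring our digital twin!",
          ai_generated=True)
print(render_disclaimer(ad))
```

In practice the flag would be set either by the advertiser at upload time or by automated synthetic-media detection; the rendering layer then applies the label uniformly, regardless of how the flag was set.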

He also stresses that the buck has to stop with those responsible for creating the content. “We really want our employees to understand how to tap into that power [of AI-generated content], but do so responsibly and not put brands at risk and not put the agency at risk.”


Future optimism for emerging tech

Despite the potential hazards, many in the field remain optimistic about the future of AI-driven marketing and media – even deepfakes. “As the tech gets better, it’s possible that deepfakes might be used in the future for interesting advantages or applications,” McKibbin says. “For example, a celeb might license their image to a brand for an ad, but not be able to accommodate shoot days, so deepfakes might become a logical solution to production constraints. Similarly, platforms like Shutterstock and Getty might do things like license out live video actors designed specifically as ‘digital canvases‘ for legal and ethical deepfakes.”

Some applications like these are already happening. Respeecher, a platform that harnesses speech-to-speech machine learning models, will be used to voice Darth Vader in future Star Wars installments, recreating the performance of James Earl Jones, who has licensed his voice to the program.

But experts say we’re still in the early chapters of the story – and the future is yet unknown. “We have only just begun to see all of the ways that synthetic media will be used,” says DuBravac. And as it becomes more challenging to distinguish authorized, ethical applications from malicious ones, he says, “transparency will play an increasingly important role when it comes to this technology.”

