By Katie Deighton, Senior Reporter

June 12, 2019 | 8 min read

Deepfakes – videos and audio that take the #fakenews phenomenon to new heights – are posing a threat to Facebook's less-than-watertight policies and to informed democracy at large. The Drum explores their roots in the industry that created them: advertising.

The emergence of ‘deepfake’ video, audio and imagery is threatening to undermine our sense of a shared reality. But to what extent are advertisers and creatives colluding in this, and how transparent should they be when releasing synthetic work?

In the winter of 2017, Alan Kelly had an idea. The executive creative director of Dublin’s Rothco was watching documentaries on Netflix about the assassination of John F Kennedy (still a revered figure in Ireland, thanks, in part, to his ancestry) when he learned how, on that fateful day, the president had been on his way to the Dallas Trade Mart to deliver a speech.

“I’d never heard of it, or even knew he was going there to give a speech,” Kelly says. “I did a quick Google search to see if I could find the copy and it was easy to track down. Then it was like... what if? What if there was technology available that could recreate it?”

Kelly spoke to his producer the next day and the two sought out the expertise of CereProc, a text-to-speech company known for storing the voices and phraseology of motor neurone disease patients in Scotland. The Times of London, which was launching its print edition in Ireland around the same time, then commissioned the project as a stake in the ground for its informed, in-depth and global positioning.

Eight weeks, 831 recorded speeches and millions of assembled sound fragments later, the speech was recreated. The Times released the work as ‘JFK Unsilenced’, a campaign that subsequently picked up the Creative Data Grand Prix at Cannes Lions.

Around the same time, the term ‘deepfake’ began to gain traction online. A portmanteau of ‘deep learning’ and ‘fake’, the word is thought to have originated on a Reddit forum whose members used it to describe the process of compositing celebrities’ faces on to the bodies of porn actors. It soon became the catch-all term for video, audio and imagery that has been imperceptibly distorted through machine learning, CGI or both – although such techniques had existed in advertising and Hollywood for a number of years.

Dior revived Marilyn Monroe and a host of other dead starlets for its 2011 J’Adore commercial, while Galaxy (Dove in the US and some other markets) gave Audrey Hepburn the same treatment in 2013. Most famously, Forrest Gump’s producers had ‘John Lennon’ speaking directly to Tom Hanks 14 years after the singer’s death.

But ‘JFK Unsilenced’ marked the advent of deepfaked audio without the use of voice actors, and landed just as software was turning a once-complex CGI task into a quick and easy AI job. With its political connotations, the campaign found itself at the center of a debate regarding the ethical implications of a creative process previously confined to the realms of entertainment and advertising.

The conversation came to a head when videos surfaced of world leaders making statements they never actually made: ‘Barack Obama’ calling Donald Trump a “total and complete dipshit” in a cautionary video from BuzzFeed, for instance.

Unsurprisingly, then, Catherine Newman – the chief marketing officer behind The Times’ ‘JFK Unsilenced’ – is keen to put distance between the campaign and the more nebulous deepfaked videos slowly filtering through the internet.

“This could have very quickly been taken as fake news had it not had credibility,” she says. “People may not have understood the complexity that Rothco overcame and the work that was put into it, which we obviously validated on our side.”

The sheer effort that went into the 22-minute sound file is proof that deepfakery is not yet as problematic for democracy as the recent scourge of fake news.

“It doesn’t appear to be ready for primetime,” says tech ethicist David Polgar.

“In the examples online, a considerable amount of time, money, energy and technical equipment is evident, meaning that most of the debate around deepfakes is a couple of years down the road. Current methods of distributing fake news still tend to involve Photoshop or falsified documents.”

However, it is likely that the techniques used by Rothco, BuzzFeed and a number of other media and marketing companies will become affordable and democratized sooner than we think. When that does happen, Polgar is less concerned with their immediate capacity to deceive (although publications such as the Wall Street Journal are already schooling their journalists in how to spot deepfaked videos before reporting on them) and more with their longer-term repercussions.

“Deepfakes could shatter our previous notion of a shared reality,” he explains. “You see a video and believe it to be some objective window into a truth. So, the bigger concern is that it’s going to allow politicians specifically to deny authentic videos by saying ‘it’s clearly a deepfake’.”

If the future does hold mass democratic confusion and collusion over what was once objective reality, to what extent would it be the fault of the advertising industry? By bringing the technology to the masses – as it has with virtual and augmented reality – is it at least partially responsible for bolstering the power of weaponized fake news?

Iain Tait, who produced a deepfake-inspired film for the artist Gillian Wearing at Wieden+Kennedy London, admits that anything involving emerging technology makes him “feel a bit uncomfortable”, explaining that “the potential for ‘corruption’ of deepfakes just happens to be a bit more obvious and media-friendly – because it’s so visual – than other technology”.

But he hopes the film, which was made for Wearing’s Cincinnati Art Museum exhibition and portrays strangers ‘wearing’ the artist’s face, will spark debate rather than inspire nefarious practice.

“Personally, I hope that people who see the piece start to query the images and personalities they’re exposed to through their algorithmically filtered feeds.”

On the other side of the Atlantic, Goodby Silverstein & Partners recently unveiled a similar campaign. The agency worked with the Salvador Dalí Museum in Florida to create an AI-bred avatar of the late Spanish surrealist, built by scraping footage, photographs and interviews with the artist.

“We fully understand the possible ramifications and applications of this technology when it comes to fake news or impacting elections or, of course, involuntary porn,” says Roger Baran, a creative director at GS&P.

“But there are always evil applications and good applications. It was like that with nuclear energy, it’s like that with social media and it’s going to be like that with AI and deepfake. We can’t stop progress, but we can use our brainpower, passion and resources to show that it doesn’t have to be nefarious.

“This application of AI doesn’t have to be used only for shady and scary ends. It could give us magical experiences and open new opportunities for culture, entertainment and education.”

Polgar agrees and even uses the same nuclear analogy, but he also believes that advertisers and creatives need to take tangible steps to usher in the ‘magical’ and banish the nightmarish. He anticipates the emergence of tools that can detect videos featuring warped realities, but more importantly he advocates for badging campaigns that use deepfakery, much as sponsored influencer posts are labeled.

“The general public strongly desires to know if what they are interacting with is real,” he says. “That doesn’t mean people don’t want synthetic media; it just means that blurring the line between real and synthetic without transparency is disrespectful.

“The advertising industry can and should take a stance and clearly distinguish between the two.”

This feature first appeared in the cyberwarfare issue of The Drum magazine. In it, we take a look at the role of our industry in a world where humdrum technology and everyday communication have become weaponized – from our smart homes being hacked and our fridges held to ransom to fake news. You can get your copy here.
