
The 5 biggest unresolved issues surrounding the use of AI in marketing

By Webb Wright, NY Reporter

October 11, 2023 | 10 min read

We spoke with marketing experts to find out about the biggest AI-related dangers that are weighing on their minds.

A plethora of companies are working to develop new and more advanced AI models. / Adobe Stock

Marketers are scrambling to give themselves crash courses in AI and to integrate the technology into their workflows. At the same time, many across the ad industry have come to recognize the need to impose guardrails around the technology – the advancement of which is continuing at a breakneck pace, despite a call from experts earlier this year to impose a six-month moratorium on the development of AI models more powerful than OpenAI’s GPT-4. (Elon Musk, one of the signatories of the open letter calling for the pause, quickly went on to launch his own AI company with a stated goal to compete with OpenAI.)

Last week, the Advertising Association – a UK-based trade association – announced the launch of an AI task force aimed at helping the UK’s ad industry to navigate both the promise and perils presented by AI. Co-chaired by representatives from Google and VCCP, the task force held its first meeting on September 25 and will continue to gather every six weeks over the next year.

Following the launch of the association’s AI task force, we wanted to find out: what are the most urgent, unresolved issues surrounding AI in marketing?

Here are five that stand out.

1. Transparency

The ability of generative AI models like ChatGPT and Midjourney to create content that is nearly indistinguishable from that created by human beings is both their most impressive and their most potentially dangerous feature. Some major tech companies have taken early steps towards ensuring that AI-generated content is clearly labeled as such, but that goal is likely to become only trickier to achieve as generative AI grows more sophisticated and commonplace.

As a result, some marketers feel that the use of generative AI within their industry carries with it the need to clearly communicate such use to viewers. “While there are several issues to address [regarding the use of AI within marketing], transparency is at the top of my list,” says Brian Yamada, chief innovation officer at VMLY&R. “As an industry, we need to be transparent to audiences when they are interacting with an AI-powered experience – especially when they are directly interacting with it.”

2. Accuracy

Another major challenge that marketers face in their use of AI revolves around the tendency of popular AI-powered chatbots like ChatGPT and Google’s Bard to “hallucinate,” or produce factually inaccurate information with the confidence of one who is telling the truth. Much has been written about this phenomenon over the past year, including a memorable front-page story published by the New York Times in which Microsoft’s GPT-4-enabled Bing chatbot took a reporter on a hallucinatory roller coaster of a conversation, naming itself Sydney and trying to convince the reporter to leave his wife so that the two of them could be together.

Although the surge in press coverage has prompted new guardrails from the tech companies building these bots (following the Times story, Microsoft began to limit both the number of questions users could ask Bing per conversation and the number of conversations users could have with the search engine per day), the hallucination issue is still far from resolved. Marketers – and anyone else using the technology for, say, research purposes – should approach the output of ChatGPT and other AI-powered chatbots with a grain of salt and a thorough fact-checking process.

3. Employee morale

The rise of advanced AI models over the past several months has also stoked another oft-written-about phenomenon: anxiety among many categories of professionals – including marketers – who fear that AI will soon make them obsolete.

A recent Gallup Poll found that roughly three-quarters of Americans believe that AI “will decrease the total number of [US] jobs over the next 10 years.” Another report published by Goldman Sachs in April found that advancements in AI could lead to the automation of 300m jobs around the world. And many young professionals are concerned that jobs in marketing will increasingly be ceded to intelligent algorithms.

New advancements in AI are likely to kindle new fears about job security, both within marketing and across a host of other industries – such as graphic design and journalism – which revolve around creativity and content production.

4. Ownership

As the popularity of generative AI has continued to grow over the past year or so, so too has awareness that the technology is often ‘trained’ on copyrighted assets and content without the owners’ consent. Companies like Midjourney and Stability AI – creators of popular image-generating AI platforms – have been sued by artists who claim that the companies plagiarized their work, while OpenAI has faced a string of lawsuits from authors who claim it illegally used their written work to train its large language model (LLM).

Questions surrounding copyright within the still-burgeoning field of generative AI will likely take some time to be resolved by the courts. In the meantime, marketers would do well to pay attention to these developments. “In regard to AI, ownership is the single most important issue,” says Code and Theory cofounder Dan Gardner. “All other issues have a cascading effect from ownership.”

5. Consent

The recent escalation in the use of deepfakes within marketing has underscored the critical importance of consent – that is, the need to gain express permission from an individual before using generative AI to represent them in an ad. (That need is obviously complicated when the person in question is deceased, as was the case in a recent Volkswagen commercial.)

Some marketers are excited about the rise of deepfake technology, pointing to the potential for, say, celebrity cameos that can be produced without actually needing to fly said celebrity to the production site. However, others are concerned that the industry is becoming too complacent about consent when it comes to the use of deepfakes. “I’ve got a list as long as my arm of brands that think [generative AI] means you can get around permission from talent or influencers appearing in content – it doesn’t, and if you ignore that, you will get punished in the long-term both legally and by audiences,” says Martin Adams, cofounder of AI companies Metaphysic.ai and Codec.ai.

For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter.
