Four top ad agencies share their ‘ethical codes of conduct’ for generative AI

By Sam Bradley, Journalist

March 27, 2023 | 11 min read

Media.Monks, R/GA, Ogilvy and Dept are developing ethical guidelines for how they use generative AI. The approaches differ but the consensus is that the rules are needed as soon as possible. Here’s where they’re at.

The use of AI tools like ChatGPT brings a host of ethical issues to the doors of ad agencies / Unsplash

Dutch digital agency Dept has joined Media.Monks, Ogilvy and R/GA in developing an ethical framework to guide the use of generative AI in advertising, The Drum has learned.

Agencies hoping to use tools like ChatGPT and Midjourney face several ethical issues. In response, digital agencies have laid out rules, or “core principles,” designed to anticipate problems that could arise from AI integration, such as navigating redundancies caused by the tech and deciding which jobs generative AI and machine learning tools should be used for.

Sample bias is a further issue. Datasets used to train generative AI tools can reproduce prevailing biases, such as racist or sexist language, and in some cases those datasets may have violated copyright law – a concern highlighted by Getty’s lawsuit against the makers of Stable Diffusion. And there is the question of where and how agencies should signpost the use of AI in advertising creative.

According to vice-president of emerging tech Isabel Perry, “it’s incredibly important” that agencies like Dept settle on an ethical framework now, rather than in “three years’ time.” The agency says AI applications already underpin 30% of its annual revenue, a figure it predicts will rise to 80% within two years. Meanwhile, 10% of its overall revenue will come from its machine learning operations arm. Clients already using the agency’s AI services include eBay, Philips and Seacor.

Its framework accompanies ‘foundational values’ that commit the company to growing its use of AI to improve data analysis and “dimensionalize” new ideas faster – and which state the agency will “do our best to offer training” to anyone whose job is replaced by AI “in an effort to retain employment.”

The rules are as follows.

Dept’s AI guidelines

  • “We consider the long-term implications of the use and adoption of AI.”

  • “AI allows us to collide perspectives and experiment with the outcomes, which gives us the ability to expand the influence lesser-known creators [have] on the world.”

  • “We collect data and train AI ethically, exposing to our clients the inputs guiding our intelligence.”

  • “We believe that AI is a tool to identify opportunities to improve the functionality of our marketing and technology deliveries.”

  • “We believe that AI automation serves as a way [for the] team to focus on more innovative tasks within our business.”

Dimi Albers, the agency’s chief executive, says these principles should prepare the business for any coming legislative changes.

“There will be legislation, but it will be vastly behind where the actual developments are. That's the case with every technology,” he says. “As a services company, it’s about how we anticipate the way that governments are going to look at companies, and how we can ensure that we are already doing the right thing.”

In addition to the framework, he says Dept is putting in place “a formal board” to rule on questions of AI ethics before the end of spring. That development, he hopes, will help the agency spread its AI principles to clients. “If you have that, it will be slightly easier when you work with clients to have these conversations. Businesses are going to say ‘How can I use this to be faster or more efficient?’ It’s up to us to figure out how we can do that in a way that everyone still feels good about.”

Dept’s guidelines remain a work in progress, and Perry says the company plans to take input from across the business as they’re finalized. “We’re going to have an all-company meeting where we present our AI ethics and principles because it’s really important that we give every Dept-ster an opportunity for debate around this.

“We’re also going to be presenting very short guidance for lawyers in that session because inevitably... we need to be giving good advice to every single persona about copyright and IP and what and what not to share with these tools, what this means for clients,” she adds.

AI bible

It’s not the only agency to create a formal set of guidelines. Challenger network Media.Monks has published its own, which states that the use of AI within its business “requires deliberate consideration of representation, bias and potential harms”.

It also reminds staffers of the risks to intellectual property around AI use: “We will always be stewards of our clients’ brand reputation, intellectual property, data security, and potential for growth. That stewardship extends to any AI-centric technology we use or recommend in the course of service and any original AI-powered tools that we build in partnership with them.”

While the document doesn’t lay out a specific commitment to workers displaced by generative AI, it does pledge to “redesign our operation and provide our people the access and training they will need to unlock their potential in this AI-assisted future”.

Dept and its rivals are not the first organizations to consider the ethics of using AI; researchers and ethicists have debated the responsible use of such systems for decades. Many agencies, though, have already begun trading on their expertise without a formal ethics framework in place.

At Ogilvy’s AI.Lab unit in Paris, agency executives are midway through writing a set of “10 commandments” that will guide their colleagues’ use of AI tools. Mathieu Plassard, president and chief executive officer of Ogilvy Paris, says: “We have a few documents, but what I want to do is formalize them into an AI bible.” The unit has already laid down some “generic principles” regarding supervision and sample bias.

“AI will not substitute creativity and human oversight is essential at all stages,” he says. “We know generative AI learns from platforms, so we'll only use AI when we're confident of the sources... so we’re not putting anything at risk when it comes to our [reputation] or our clients’ reputations. We are transparent to our people and to our clients and we’ll make it clear when we use AI.”

While the team has to follow the agency’s own legal guidelines, he notes that staffers will also have to be aware of their clients’ ethical rules – and that these might not match up. “We’ll have to make sure we meet the client’s regulations,” he says. “And when we input or prompt with some element of a brief, we must ensure we don’t reveal any client IP or secrets.” The agency’s ‘AI Academy,’ announced recently, will train staffers in ethics alongside other skills, he adds.
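
To make that last point concrete, a simple pre-flight check can strip known client identifiers from a prompt before it is sent to a third-party tool. The sketch below is a generic illustration rather than Ogilvy’s actual process; the blocklist contents and function name are invented for the example.

```python
import re

# Hypothetical blocklist of client identifiers that must never reach an external tool.
CLIENT_TERMS = ["Acme Corp", "Project Nightingale"]

def redact_prompt(prompt: str, terms: list[str] = CLIENT_TERMS) -> str:
    """Replace each confidential term with a neutral placeholder before prompting."""
    for term in terms:
        prompt = re.sub(re.escape(term), "[REDACTED]", prompt, flags=re.IGNORECASE)
    return prompt

print(redact_prompt("Write taglines for Acme Corp's Project Nightingale launch."))
# -> Write taglines for [REDACTED]'s [REDACTED] launch.
```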

His colleague, executive creative director David Raichman, says that training staffers in AI skills is itself an ethical principle. “We don’t want to give our employees the message that you will be replaced by an AI in the future. We want to say that in the future, you will have new skills, you will be improved and you will learn to manage this.”

He argues that the ethical way forward is for the agency to find ways to “preserve artists, to find new ways to work. For example, if we like an illustrator, instead of stealing his [style], maybe we can work with him, training a generative AI with his illustrations. This is an ethical imperative for the agency to preserve talent.”

Industry standards

At R/GA, London executive creative director Nick Pringle says the agency is also working towards a formal set of rules. Because the agency has only been using generative AI in a “discrete” way, rather than “systematically” throughout its organization, Pringle says much of the ethics work has focused on establishing norms around legal permissions and transparency. “We get a ton of pre-market information in our briefs, a lot of trade secrets. So everyone has to be aware that everything they’re typing in will be owned by a third party. It’s unlikely that it’s going to get out… but it’s not private.

“We’re not creating production-ready assets, so we can’t infer to a client that [it] can be taken on and used commercially,” he explains. “If we are showing work for a client that has used AI then there has to be a watermark, we have to be totally upfront. We have to state which platform we used.”
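
As an illustration of that disclosure rule, stamping a platform credit onto a concept image takes only a few lines with an imaging library such as Pillow. This is a generic sketch under assumed file names, not a description of R/GA’s tooling.

```python
from PIL import Image, ImageDraw

# Load a concept visual and stamp an AI-disclosure label onto it.
img = Image.open("concept.png").convert("RGBA")
draw = ImageDraw.Draw(img)
label = "Concept visual generated with Midjourney - not production-ready"
draw.text((10, img.height - 24), label, fill=(255, 255, 255, 220))
img.save("concept_disclosed.png")
```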

So far, Pringle’s team has tried to use AI as a corrective, rather than to drive production directly. “It’s sort of like the principles of aikido, the martial art where you use your opponent’s strength against them. An example would be using ChatGPT to generate 10 tropes used in the confectionery market; now we know those tropes and can avoid them.”
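
A prompt along those lines is easy to script. The snippet below is a minimal sketch against OpenAI’s chat completions API, not R/GA’s actual workflow; the model name and prompt wording are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask the model to enumerate category clichés so the creative team can avoid them.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "List 10 creative tropes commonly used in confectionery advertising."},
    ],
)

print(response.choices[0].message.content)  # circulate as a 'do not use' checklist
```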

Pringle argues that every agency employing generative AI in its work should state when, and how, that technology has been involved.

“I think it’s a completely reasonable request,” he says. “While we’re still in a legal gray area, and in a global economy where copyright infringement in the US is different to the UK… we have a real responsibility to do that.”
