
Does your agency have an AI usage policy yet? Here’s what you need to know


By Sam Bradley, Journalist

June 6, 2023 | 12 min read

As AI adoption continues, we ask a selection of agencies how they’re designing formal usage guidelines.


How can agencies promote responsible use of AI tools without stifling experimentation? / Unsplash

The emergence of generative artificial intelligence (AI) as a viable tool for the modern workplace has thrown up myriad ethical and legal issues. Copyright, sample bias, transparency and accuracy all pose potential threats to businesses embracing the practice.

The first line of defense is a well-thought-through set of guidelines. Some agencies have already drawn up theirs, but plenty of businesses are still working out their approach – and some think it’s too early to even think about defining one philosophy on AI.

So, what should go into an AI usage guide? And how can an agency keep everyone away from the live wires without stifling experimentation?

How do you solve a problem like… drawing up an AI usage policy?

Phil Fearnley, group CEO, House 337: “In the absence of robust government regulations on this potentially world-changing technology, our industry must focus on self-regulation. We’ve developed guidelines to focus on ethics, safety and trust in everything we do. These pillars allow us to work with AI to enhance creativity, optimize workflow and ensure AI enhances and doesn’t replace humanity in the craft. Understanding data allows us to consider ethical issues. Transparency, training and education are essential in building trust and collective learning. Our clients need to know what considerations we have undertaken on their behalf. Industry bodies should unite to develop policies and tackle the issue collectively.”

Chris Wlach, general counsel, Huge: “At Huge, we have equipped our teams with certain baseline guidelines about how to use Generative AI tools safely and smartly, taking into account data security, IP protection, transparency, bias and other concerns. At their core, our guidelines seek to balance those concerns against the tremendous power of these tools. As the technology develops, as the legal landscape changes and as industries react, our guidance for using these tools evolves too. It has to. Ultimately best practices and principles on these matters can’t be driven solely by lawyers or data security experts. They are shaped through constant learning and cross-team dialogue – one involving every employee at Huge.”

Elav Horwitz, global director of applied innovation, McCann Worldgroup: “We prioritize the ethical, empathetic, and inclusive impact of AI. When it comes to exploring the potential of generative AI, we bring together a diverse team of experts. In addition to our AI and tech specialists, we collaborate with DE&I experts, sustainability professionals, legal teams, finance experts, human intelligence strategists, and our core creative teams.

“This multidisciplinary approach allows us to delve into how AI influences society, creativity, inclusion and the industry. We address the responsibilities we hold as a global company, while seeking meaningful collaborations with tech companies, startups, and AI artists. Our goal is to leverage generative AI in ways that align with our values and make a positive difference. With a blend of creativity, curiosity, and caution, we experiment with generative AI tools to push boundaries, drive innovation, and prioritize social impact.”

James Calvert, chief data strategy officer, M&C Saatchi London: “When crafting your AI policy, common sense should prevail. Specify where and how AI should be used while noting ethics, fairness, transparency, and accountability. Have robust measures to safeguard sensitive data and maintain client confidentiality too.

“However, given the rapid pace and early phase of Gen AI, you should foster a ‘learning loop culture’ to embrace mistakes and learn from them for oversight and quality assurance. Remarkably creative work arises from those who master the technology, not from just the tools themselves. Striking a balance between human expertise and Gen AI’s potential is key.”

Al Mackie, CCO, Rapp UK: “We’ve been working with clients for a while to understand the need for ethics steering committees. Not just for AI, but for all things data and tech. However, the explosion of AI and its speed of evolution have accelerated this need. Within the creative department specifically, we have partnerships with most of the big players, which helps. But we’re committed to continuous learning, adapting and evolving ethical considerations. To address the ethics of AI in creativity right now there’s a huge need for human oversight, transparency and disclosure. That’s primarily because DE&I efforts are critical to combating biases and stereotypes in AI-generated content.”


Mark Izatt, creative director, Cream: “With AI advancing quickly and attitudes shifting almost daily, it’s crucial that everything is looked at on a case-by-case basis – at least for now. With each project, we are seriously considering the possible ethical pitfalls before even taking it on – such as gender and race bias, removing potential income from creatives, and plagiarism. When it comes to policies and contracts, we tap into resources from industry bodies such as the IPA, to ensure that we’re working alongside the wider workforce and not simply relying on the perspectives of the small team working on a given project.”

Simon Levitt, global creative technology director, Imagination: “As an experience design company we have always worked with cutting-edge technology to push our creativity. To guide this we have developed a cross-discipline AI framework to encourage exploration and experimentation, with simple and effective guidance that safeguards our clients and their customers. The rapid evolution of AI does not support strict policies but requires active input from our teams. We have devised gates where specialists review and endorse generative AI tools before we make them available for wider use. How we use AI won’t always be the same – it will be unique to each brand – and our guidance needs to reflect this.”


Reid Carr, CEO and executive creative director, Red Door Interactive: “We actually started with AI and asked ChatGPT to draft an AI policy for us. This gave us a general outline of considerations before routing it to our various subject matter experts in creative, media, analytics, HR and more to evaluate what it gave us. These different perspectives helped us ratify and implement our policy across Red Door, along with team training to help with any knowledge gaps. Knowing things will evolve, we’ve assigned different people to track potential policy changes as we learn more about the effects and requirements of AI in our industry.”

Brian Brown, president & chief creative officer, Ingredient: “At Ingredient, we hold our employees accountable when it comes to AI and as an agency, will never present a deliverable that is AI-generated. We believe our best work is human-curated and AI-scripted content isn’t what our clients are paying us for. However, in the midst of a hiring surge, we’ve used ChatGPT to support the writing of our job descriptions. It’s been a successful tool for streamlining our interview and staffing processes and posting positions in a timely manner. At the end of the day, as a best practice, agencies should remain transparent with clients and inform them when they are using AI.”

John McGrane, director of brand communications, Media Matters Worldwide: “First, we must acknowledge and embrace the technologies – they are here for the long run and are never going away, so we can’t put our heads in the sand.

“We used a healthy amount of caution to avoid ethical pitfalls. It must be understood that the information AI tools generate is only as good as the data inputs. Results generated by AI can be biased, unfair and flat-out wrong. In writing the policy for our team, we were (overly) cautious on the first versions. This technology is in its infancy and there isn’t much regulation. It’s better to err on the side of caution to avoid an unforeseen disaster. We spell out acceptable and unacceptable real-life uses. Since then we’ve actively updated our policy frequently. AI tools improve by leaps and bounds daily and things will continue to change very quickly.”

Gary Carruthers, managing director and founder, Underwaterpistol: “To create our AI usage policy, we researched what other organizations were doing, then adapted these policies to our needs and values. Within our agency, the use cases of AI tend to be research, ideation and code validation (all are still human-checked/researched/implemented), but we were careful to acknowledge any potential risks. By taking a flexible approach, we’re putting the organization in a better position to uphold innovation while fostering trust and ensuring responsible implementation.”

Vic Drabicky, founder and chief exec, January Digital: “Our policy on AI use is ‘test freely, use cautiously, always disclose’. We always encourage our teams to test new tools and, specific to AI, have a task force that is training our entire staff on innovative new uses. That said, AI is a new technology and has all sorts of flaws, hiccups and limitations too, so it’s important to be cautious. We ask our teams to be upfront when they test and to clearly outline what they learned, the strengths, and the weaknesses they experienced.”

Dan Peden, product director, Journey Further: “Right now as an agency, we’re encouraging exploration and play with AI across all our functions. Everyone has access to Bard, Midjourney and ChatGPT, for example, and we’re actively sharing the results across our agency.

"Where we would eventually put a framework in place would revolve around risk. We’re not dealing with critical systems or sensitive data but where we are dealing with any data we’d urge caution and human sense checking. Hallucination within AI is very real – we don’t want to replace humans with made-up content, only supplement them, making them more efficient.”

Is your agency taking a different approach? If so, let me know. My email is sam.bradley@thedrum.com.

