Good Artificial Intelligence Regulation

Sooner or later, AI will be regulated. What does the ad industry want from government?

By Sam Bradley, Journalist

July 20, 2023 | 11 min read

Agencies and industry bodies are closely monitoring government regulation of AI tools. But if legislation takes too long, should adland move ahead with an AI ethics regime of its own design?

The US Congress may establish a commission to investigate AI regulation / Unsplash

The use of AI will, sooner or later, be regulated. That’ll affect any agency using AI tools, whether for early experiments or on actual client projects.

But there are already significant differences in how different governments treat AI. While the EU is taking a “prescriptive” approach in its AI Act, the British government has indicated it will take a laissez-faire stance – and the Japanese government, presiding over the third-largest ad market in the world, has said it considers all IP ‘fair game’ for AI training, waiving its power to enforce copyright law.

In the US, where the White House has released a framework document dubbed the ‘AI Bill of Rights,’ existing copyright law may end up governing how AI can be deployed, while a bill introduced in June could establish a Congressional commission to investigate AI rules.

AI ethicists and agency leaders in the UK, US and Europe hope that future regulation in those markets guides practitioners toward responsible use of the tech – and helps guard against some of the negatives it might hold for their businesses.

What do agencies want from regulators?

Nilesha Chauvet, managing director at British agency Good, says: “The outcome we’d want to see is more consensus in the sector about the practical application of AI, more discussion about its value and purpose, and its role versus the authentic role of creativity.”

She says she’d want to see guardrails that encourage using AI tools only where necessary, so as to preserve ‘authentic’ creativity; transparency when they are used; and sensitivity around the use of data and the fact-checking of content created by generative AI tools. “We’re very aware there is a crisis of creativity in our sector. It’s becoming harder to recruit the kinds of talent we want, particularly creative talent. For us, it’s really important we have transparent conversations and give credit where credit is due.”

Transparency – in practical terms, a system that would force publishers or advertisers to label when AI tools have been used to manipulate an image, for example – and the integrity of the datasets underlying AI tools, such as the now-ubiquitous large language models (LLMs) behind ChatGPT, are the most common concerns among agency leaders.

But many are determined not to wait for regulators. For Media.Monks, exposure to copyright suits relating to training datasets, and to other legal actions mounted against AI developers, is a present concern, not a future one. “We’re already seeing that in the tech, and on any output we train on our own material, we’ll be indemnifying as well,” says Wesley ter Haar, executive director of parent company S4 Capital.

In the absence of regulation in the UK, Good – which focuses on charity and third-sector clients – has published an AI charter containing guidelines on the areas Chauvet’s team think are most critical.

Chauvet suggests that industry bodies should “step up” to fill the void left by the absence of government regulation. “If the UK imperative is that we need to regulate this at a sector level, that puts pressure on the bodies. They need to establish some frameworks to allow better conversations for agencies.”

Karen Baker, the founder and president of Boathouse, an American healthcare agency that is invested in AI capabilities, says agencies must participate in the debate around AI. “This industry needs to be in the conversation that notes what is working and what is not in practice and experimentation,” she tells The Drum. “As we build our AI stack for clients, our focus remains on being transparent and providing explanations, not distributing a false narrative. As marketers, we want to remember that human intelligence collaborating with artificial intelligence will keep us in a place where ethics, inclusivity and safety remain of the utmost importance.”

Not every agency has faith in the power of adland consensus, though. Isabel Perry, vice-president of emerging tech for Dept, which is based in the Netherlands but operates in the US and the UK, says: “I personally just think that the government should be setting the agenda and we should be falling in line. If it’s helpful to have technical experts specifying the implementation, that’s fine. But I would prefer that the line was drawn by people who didn’t have business interests.”

Lukas Stuber, the agency’s digital ethics ambassador, agrees. “This needs to come from the government. Nevertheless, we should move faster. We shouldn’t wait until ratification but orient ourselves along the lines that governments are beginning to draw right now.”


Industry bodies say it’s too early to regulate

In contrast, Chris Combemale, chief executive officer of the UK’s Data & Marketing Association (DMA), says he supports the UK government’s current hands-off position. “Our position on AI regulation mirrors our views on data privacy regulation – as they are inextricably linked – where innovation and growth must be balanced with respect for privacy so people can trust the data-driven digital economy. Their approach of building on existing regulatory expertise and legislation to create guidance, develop community relationships, and regulate how AI is used in each sector will work well.”

He adds that criminal law should provide a decent framework to combat potential AI harms. “AI that supports criminal activity, such as fraud, must be investigated by the police and prosecuted under existing criminal law.”

That position is broadly shared by Ashwini Karandikar, executive vice-president of media, technology and data at the American Association of Advertising Agencies (4A’s). “We all think it’s a little too early to regulate,” she says.

She argues that the known potential harms associated with AI – sample bias, IP theft, fraud – should be contained by existing legislation regarding discrimination, data privacy, copyright or criminal fraud. “We don’t know all the use cases,” she says, pointing out that regulation put in place too early could prevent the industry from accessing the benefits of AI. “If I never have to crunch another Excel sheet, I would love that.”

In the absence of federal or state-level regulation in the US, the 4A’s is pointing agencies towards guidance provided by the National Institute of Standards and Technology (NIST), a federal agency that published the AI Risk Management Framework earlier this year. The guidance aims to help practitioners design and develop AI systems that account for the various harms the tech can unleash.

How much space should governments leave?

Given the EU and UK’s differing initial positions on AI regulation, a gulf will likely open up between the two markets. Combemale warns that the “more prescriptive” European approach could “create additional administrative burdens for businesses if the ultimate approach differs too greatly.”

He adds that businesses operating across the Channel will have to educate their staff on the new regulation to keep to both sets of rules.

The difference highlights a key issue in the debate around the use of AI: how much leeway should governments give agencies, tech firms and media companies to innovate? Though higher productivity might generate more wealth (in theory, for private owners, workers and the taxman alike), the potential for harm – for mass unemployment or exploitation – is real.

At Dept, Stuber says caution would be wise. “There is a certain dialectic when it comes to the term ‘innovation.’ There’s one stance that says, ‘As soon as you regulate, innovation will be stifled.’ And then there’s the other, which says, ‘We will not innovate at any cost as long as there is no regulation in place because it will be super unsafe.’ I know clients from both sides of that aisle.

“My personal preference would be that in the EU, UK and US, these societal questions are at the forefront of all these considerations. At the end of the day, business is also part of what we do as a society together. And those things have to be in harmony to some degree. Innovation for innovation’s sake can sometimes be a little dangerous. There has to be an equilibrium between what society needs and what we, as businesses, do. The best-case scenario would be that we have a worldwide equilibrium in that regard. A guy can dream, right?”

His colleague Perry adds: “I don’t dream about regulation, but I’m all for it. I think it’s incredibly important that the EU sets the agenda. And if you look at the way the GDPR has influenced other countries, that’s clearly a good thing.

“The binary conversation about whether or not AI is good or bad… that entire question should be reframed as well.”

Instead, Perry argues that the focus of regulators should be, “What can we do now to help ensure that AI is good, rather than targeting the low-hanging fruits of innovation?”

Agencies operating free from hard rules (for now) will have to be guided by their better angels. The ways agencies come to apply AI tech, Perry says, will be “mirrors of your own values.” Optimistically, Chauvet suggests that this might yield better behavior from the private sector. “You can have loopholes in regulation, but you can’t have a loophole in ethics,” she says.
