
AI regulation is (probably) on its way. Here’s what marketers need to know


By Webb Wright, NY Reporter

June 16, 2023 | 10 min read

Lawmakers and industry leaders are increasingly calling for government oversight of the technology, which some experts have said could pose an existential threat to humanity on par with pandemics and nuclear weapons.


If passed, the EU’s AI Act would severely limit the uses of AI-powered facial recognition software. / Adobe Stock

Around the world, governments are advancing plans to regulate artificial intelligence (AI) – a technology that in recent months has made a dramatic leap in sophistication and social impact. Though still in their early stages, such prospective rules could affect not only the powerful tech companies building and commercializing AI, but also the brands and individuals using it.

Europe has taken the largest strides toward establishing new AI regulations. On Wednesday, the European Parliament approved a draft of the EU’s AI Act, a law intended to impose limits on the use of AI – notably facial recognition software – and to enforce transparency from AI companies. A final version of the law is expected to pass later this year.

Progress has been slower in the US, but both the public and private sectors are gradually embracing the idea of regulating AI. In October – shortly before the release of ChatGPT, the text-generating AI model largely responsible for sparking the current global conversation about AI safeguards – the Biden White House published its ‘Blueprint for an AI Bill of Rights,’ a document intended to “guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.”

Some AI industry leaders have recently begun to openly urge governments to pass new laws imposing guardrails on AI. Last month, for example, OpenAI chief executive Sam Altman – whose Microsoft-backed company built ChatGPT – testified before Congress, telling lawmakers that “if [AI] goes wrong, it can go quite wrong,” and that his company was committed “to work with the government to prevent that from happening.”

That same month, Altman and a cohort of other AI leaders signed a 22-word open letter asserting that AI poses an existential risk to humanity and that mitigating such risk “should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

“It’s a significant phenomenon when you have a number of CEOs of companies saying, ‘Yeah, there should be regulation of our industry,’” says Cameron Kerry, a visiting fellow at the Brookings Institution’s Center for Technology Innovation and an expert in AI and information technology. “That’s a pretty unusual event.”

Some have drawn lessons from social media, which today remains largely unregulated. In a recent op-ed published in The New York Times, author and historian Yuval Noah Harari and Center for Humane Technology co-founders Tristan Harris and Aza Raskin argued that society’s ongoing failure to rein in the psychologically harmful and politically divisive effects of social media’s algorithms bodes ill for the new wave of more advanced AI models like ChatGPT. “Social media was the first contact between AI and humanity, and humanity lost,” the authors wrote. “First contact has given us the bitter taste of things to come.”

A unified AI agency v a ‘hub-and-spoke’ approach

Opinions differ sharply about how best to design and implement AI regulation in the US.

Altman, for one, has advocated for the formation of a new government agency devoted specifically to the oversight of AI. But University of Florida law and engineering professor Barbara Evans says that the creation of new regulatory agencies “requires a level of consensus that probably can’t be achieved in today’s fractious [political] environment.” And “even if there were a consensus,” Evans argues, “forming a single AI oversight body probably isn’t a good idea.”

“Policymakers and members of the public tend to speak of ‘AI’ as if it were a single, unified phenomenon,” she says. “In reality, ‘AI’ refers to thousands of computational tools that will be deployed in a wide variety of vastly different settings, each posing different risks and offering different benefits to society.” As a result, Evans argues, government regulation of AI “needs to be custom-tailored to the precise setting where the AI is deployed.”


Rather than a single AI-focused agency, Evans suggests that it may be more effective to “mobilize all the existing US federal agencies to oversee the use of AI within the specific industries they already regulate, and develop some clear instructions on who is doing what and how to share responsibilities among them.”

Evans’ views echo those submitted earlier this week by Google – a company that, alongside Microsoft, has emerged as a frontrunner in the race to commercialize new AI models – in a filing to the White House, which was first reported by The Washington Post. “At the national level, we support a hub-and-spoke approach – with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation – rather than a ‘Department of AI,’” the Google filing stated. “AI will present unique issues in financial services, healthcare, and other regulated industries and issue areas that will benefit from the expertise of regulators with experience in those sectors – which works better than a new regulatory agency promulgating and implementing upstream rules that are not adaptable to the diverse contexts in which AI is deployed.”

How can marketers prepare?

The marketing industry – which relies heavily on data to identify and reach target audiences – has been quick to embrace the new wave of AI broadly, and generative AI specifically (that is, models like ChatGPT that are trained on vast datasets and create new content in response to user prompts).

In fact, something of an AI gold rush has taken hold in marketing over the past several months: brands across a broad swath of industries have been experimenting with these tools to offer their audiences, for example, more personalized, automated customer service experiences and opportunities for artistic collaboration.

At this point, it remains to be seen whether regulation of AI will materialize in the US and, if it does, how it might impact brands’ ability to use the technology. The best thing that brands using AI can do at the moment, says the University of Florida’s Evans, is “stay abreast of developments in the law.” She also suggests that any brand with a stake in the future of AI policy should take advantage of “opportunities to comment on proposed regulations because people in the industry have the best insight into how a proposed new regulation could affect their businesses.”

Mike Kaput, chief content officer at the Marketing AI Institute, says that brands with a vested interest in AI should be actively educating their employees about the technology – irrespective of whether the government steps in with regulations. Brands, he says, “need to sit down and draft an AI ethics and usage policy of some type … You need to start telling your employees how they should and should not be using these tools, because regardless of what regulation happens when, there are significant opportunities and risks … people don’t know what they don’t know. And a lot of this stuff is very, very new for teams and employees.”

Kaput adds that he’s watching developments in the EU and the US closely, but for the time being the onus falls on brands to ensure the safe use of AI – “which sucks, but that’s the way it is right now.”

