
Governments step up efforts to regulate AI – but there’s still a long road ahead

By Webb Wright, NY Reporter

November 3, 2023 | 7 min read

While they stop short of creating any binding regulation of AI, Biden’s new executive order and the Bletchley Declaration signal clear governmental intentions to impose and enforce strict guardrails.

The UK hosted the AI Safety Summit earlier this week in Bletchley Park. / Adobe Stock

It has been a big week for policy surrounding artificial intelligence (AI), a technology that’s caused a seismic shift within tech, marketing and other industries over the past year.

On Monday, President Biden signed a sweeping executive order mandating that companies building advanced AI models share the results of their safety – or “red-teaming” – tests with the federal government. This marks a significant change for the AI industry; until the signing of the new executive order, such disclosures to the federal government had been made on a voluntary basis.

The new mandatory measures – which aim, for example, to prevent nefarious actors from using AI to develop biological weapons – “will ensure AI systems are safe, secure, and trustworthy before companies make them public,” the White House wrote in a fact sheet describing the new executive order.

Running to just under 20,000 words, the executive order is grounded in the Defense Production Act, a US law passed in 1950 at the start of the Korean War that enables the president to expedite industrial production in the face of a national security threat. In addition to requiring greater federal oversight of the training of AI models within the private sector, the order also calls on the National Institute of Standards and Technology to create an AI safety testing framework, urges the Federal Trade Commission to step up its consumer protection efforts, eases visa requirements for foreign AI experts looking to work in the US, and initiates an array of other steps geared both toward promoting the safe deployment of AI and toward positioning the US as a leader in the regulation of the technology.

While many experts agree that it represents a significant step in the direction of imposing reliable guardrails around AI, Biden’s new executive order is not a law; only Congress has the authority to pass legislation (and that process, much more often than not, takes a very long time).

“More action will be required, and the Administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation [of AI],” the White House fact sheet reads.

What does this mean for marketers using AI?

The new executive order is unlikely to have any immediately noticeable impact on brands that are using AI in their marketing efforts, since it mostly highlights the need for future regulation. Still, it could signal changes lying not too far down the road.

For example, the order states that the federal government will begin cracking down on “commercially available information containing personally identifiable data.” Such measures “could affect data brokers and others in the data market, like adtech platforms, by shrinking the market,” says Cameron Kerry, visiting fellow at the Brookings Institution’s Center for Technology Innovation and an expert in AI policy.

At the very least, according to Martin Adams, co-founder of AI companies Codec.ai and Metaphysic.ai, the new executive order should prompt brands to reflect more closely and carefully on their AI strategies.

“I don’t see this being an on- or off-switch for specific AI tools that marketers are using today,” Adams says. “But I do think it will create pressure and the right sort of questions in the buying and procurement process: What data are you using? Does this approach have the potential to create bias and lead to discrimination through its outputs? Have you approached data-gathering and model-building with privacy and consent front and center?”

The executive order – by signaling a legitimate interest in AI safety within the federal government – could also prompt some marketers who have previously been reluctant to use AI to view the technology in a more favorable light.

“I think the impact [of the executive order] on the advertising and production industries will be largely positive,” says James Young, BBDO’s head of digital innovation for North America. “I can see this being ultimately recognized as a stamp of approval to expand implementation of AI [within marketing].”

The Bletchley Declaration

Just two days after Biden signed the new executive order on AI, the government of UK Prime Minister Rishi Sunak released a document dubbed the Bletchley Declaration, signed by representatives from the 28 countries – including the US and China – attending this week’s AI Safety Summit at Bletchley Park. (The location is famous as the site where the mathematician Alan Turing, widely regarded as the father of modern computing, broke the Nazis’ Enigma code, helping to bring the Second World War in Europe to an end.)

Like his American counterpart, Sunak has long been eager to position his country as a global leader in AI R&D and regulation.

The Bletchley Declaration acknowledged that AI presents the risk of “serious, even catastrophic, harm” to humanity, and – while it did not establish any concrete, specific goals for mitigating that risk – attempted to establish a framework for international cooperation aimed at safeguarding the technology and ensuring its responsible deployment.

“Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation,” the Declaration states. “We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI.”

According to Forrester vice-president and analyst Martha Bennett, the AI policy initiatives emerging from the UK over the past week were primarily symbolic – but they nonetheless represent a step in the right direction. “The [UK] summit and the Bletchley Declaration are more about setting signals and demonstrating willingness to cooperate, and that’s important,” she says. “We’ll have to wait and see whether good intentions are followed by meaningful action.”
