
AI Safety Summit: what marketers need to know


By Sean Betts, chief product and technology officer

November 7, 2023 | 8 min read

Sean Betts, chief product and technology officer of OMG UK, shares his lessons from the UK’s AI Safety Summit.



It can be challenging to identify significant historical moments as they unfold.

But we don’t need the benefit of hindsight to recognize the profound significance of the AI Safety Summit, which took place in the UK last week.

Cutting through some of the sensationalist headlines surrounding the summit and the alarmism about existential risks posed by ‘frontier AI’ models, the ‘Bletchley Declaration’ stands out as the most sensible expression I’ve encountered from any government regarding the societal challenges stemming from the meteoric rise of generative artificial intelligence technology. Impressively, it has garnered commitment from 28 nations to collaboratively address these challenges, including both the US and China, which, as an achievement in and of itself, cannot be overstated.


Add to this the agreed recognition that both governments and AI companies have a crucial role in testing the next generation of AI models to ensure AI safety – both before and after models are deployed – and we have a landmark moment. It marks a move away from responsibility for determining the safety of frontier AI models sitting solely with the companies themselves – the kind of self-regulation failure that saw the proliferation of some of the worst aspects of social media.

Going into the AI Safety Summit, it was somewhat unclear what was being included under the incredibly broad term of ‘AI safety,’ but the outcome – the Bletchley Declaration – provides a comprehensive picture of what the 28 national governments deem important to address.

It was refreshing to see the spotlight thrown on some of the more immediately pressing concerns associated with generative AI, with topics such as transparency, explainability, accountability, fairness, bias, and privacy and data protection given due prominence. Many civil society members and notable experts have long been highlighting these concerns. Still, in the lead-up to the summit, there was a danger that these very real issues would be overlooked in favor of the headline-grabbing ‘existential’ risks of AI, complete with comparisons to the pandemic and nuclear war.

The ‘existential risk’ narrative is one that serves the self-anointed frontier AI companies well but, unfortunately, detracts from the more real and pressing concerns that those in marketing will need to address when using generative AI technology in their work as we move towards 2024.

Next year promises exciting advancements for generative AI in marketing. While 2023 has been filled with hype and rapid development, 2024 promises the rise of more enterprise-level generative AI applications. Marketers will be able to harness these tools at scale more readily, and we’re already seeing the first signs of this with several new generative AI features having been announced recently by all the big digital players, such as Meta’s generative text variation feature and Google’s ad generation and recommendation tool. These generative AI tools will be incredibly powerful for marketers, but with great power comes great responsibility.

The Bletchley Declaration has seen governments and AI companies step up, and we, as marketers and users of generative AI applications, also have a responsibility to ensure that some of the very real and immediate issues around generative AI technology are addressed. For example, we can’t let the inherent bias in many generative AI models set the marketing industry back on the recent strides we’ve been making to ensure our work is more diverse and representative of all walks of life.

Additionally, there’s a dual need for transparency – marketers should be candid about their use of generative AI with consumers and simultaneously demand transparency from the AI models that they use. Questions such as what data trained these models, whether that data was used with consent, and how comprehensible the models’ decisions are remain essential.

The IPA and ISBA have made a great start in helping marketers navigate these challenges with last week’s launch of their twelve principles for using generative AI in advertising. Still, it’s now up to the marketing community to follow these guidelines and to ask the right questions of generative AI technology providers.

I’m 50% excited and 50% anxious about the impact generative AI technology is going to have on society, and on marketing specifically, and I still think this is a very healthy outlook for everyone to have.


Nothing announced at the AI Safety Summit has changed this point of view, but I do find comfort that global governments are taking serious, collaborative steps to better research and understand the issues surrounding generative AI technology and that those efforts won’t be distracted by the fear-mongering around existential risk.

Just as Bletchley Park will be forever synonymous with societal gains through its pivotal role in wartime code-breaking, the Bletchley Declaration will be synonymous with the protection of society in this brave new AI world.
