It’s time for a positive global conversation on the future of AI
Daniel Hulme, chief AI officer at WPP, explores how society can tap into AI effectively, efficiently and ethically as the UK’s AI Safety Summit 2023 kicks off.
At the event, UK prime minister Rishi Sunak will be joined by international governments, AI companies, civil society groups and academia for a global event on safety in artificial intelligence. This event – the first of its kind – will focus on frontier AI systems, shorthand for the latest and most powerful tools that push the boundaries of AI’s capabilities in a way the world has never seen before.
As our industry – and society at large – taps into the huge potential of these systems, there is no denying that there should be urgency around how they can be tested and monitored to ensure they do not cause harm. This will be the summit’s focus, and rightly so, but there are some important considerations for us all to keep front of mind.
Firstly, there’s a big difference between AI and humans. Humans have intent; AIs do not. The intent behind deploying an AI needs to be scrutinized from an ethical perspective, and there are already well-established standards for this. We must not confuse ethics with AI safety. To make AI systems safe, we should make algorithms explainable (if they are explainable, they become transparent, auditable and governable), and we should safeguard against systems that could cause unanticipated harm (this is an engineering problem).
Secondly, there is a huge amount of confusion and misinformation around AI risks.
There are broadly three high-level categories of risk:
Micro risks, which cover the risk of poorly implementing AI in organizations. As mentioned above, there are already well-established mechanisms to mitigate these risks: governance, explainability and engineering.
Malicious risks, which are the primary focus of the AI Safety Summit. These are risks such as cyber-attacks, the creation of lethal pathogens and the deliberate spread of misinformation to destabilize states and economies.
Macro risks, such as creating a post-truth world or surveillance states, technological unemployment and superintelligence.
Understanding the intended applications of AI technologies is perhaps the most instructive way of thinking about AI and the rapid evolution we see around us. We think of them as:
Task automation. Replacing repetitive, structured, mundane tasks with narrow (and often simple) algorithmic (or robotic) technologies.
Content generation. LLMs enable the creation of generic text and images (and soon sound and video). The battleground isn’t creating generic content; it’s creating brand-aligned, production-grade, differentiated content.
Human representation. Using AI as an interface that looks and behaves exactly like a human.
Insight extraction. This is what people (incorrectly) called AI before LLMs. It uses machine learning, statistical analysis and data science to extract insights from data. What is powerful about these technologies is not the ability to predict; it’s the ability to explain those predictions.
Complex decision-making. If you’re old enough, you’ll know this as ‘operations research.’ It’s a completely different field of computer science, involving logic, discrete mathematics and optimization.
Human augmentation. Using cybernetics and exoskeletons to make ourselves faster, stronger and better. Or even creating digital twins (avatars or copilots) of ourselves that can be used for augmented decision-making.
Obviously, the above use cases come with both opportunities and risks. But by categorizing in this way, we can work on improving technologies, skills and processes to help us solve problems more effectively and, importantly, understand the framework for a safe and ethical future for AI.
We previously spoke to Hulme here for The Drum Live. He explained how brands and agencies have a long way to go to master AI. Check out more of The Drum's coverage of the AI revolution in marketing here.