AI-powered marketing needs interpretability – and collaboration
The business world is gearing up for an artificial intelligence (AI) revolution, as the budding technology is expected to grow into a $118.5bn industry by 2025.
Many organisations are already embracing AI’s predictive abilities to automate repetitive tasks. AI allows systems to process increasingly complex types of input data, such as natural language and images. Handwriting recognition, for example, uses image recognition to eliminate tedious data-entry tasks. The Alexa assistant you have at home can understand the intent of your request from the soundwaves of your voice and answer appropriately – at a fraction of the cost of a real-life assistant.
While computers have obvious strengths, such as the ability to draw on gigabytes of data, humans have many capabilities that machines currently struggle with. Humans’ natural ability to communicate and collaborate far exceeds that of machines, and this matters most in situations where humans and machines must work together – for instance, where a human has a legal duty to understand outcomes, as in law or medicine. Other knowledge work, where human decision-making cannot be replaced but may be augmented by machine insight, poses similar challenges for current AI systems. This inability to collaborate is holding back wider AI adoption and innovation.
Marketing and AI
Let’s consider this problem through the lens of marketing, a normally tech-savvy industry which, in a lot of ways, is being surprisingly slow to innovate with AI.
Automation is already being used very effectively for repetitive tasks such as ad targeting and bidding. The real-time matching of adverts to viewers is facilitated by machine intelligence at some of the most innovative companies in the world, dynamically predicting click-through rates and optimising ad selection on the fly.
But when it comes to more strategic, creative decisions – such as which ad creatives the team should use – AI struggles to be as effective because it can’t easily explain its decisions.
To put it bluntly, AI does not seem to be a very good teammate.
The black box of AI
Generally, the better an algorithm is at predicting, the more unintelligible its workings are. This lack of interpretability is what we call the ‘black box’ of AI, and it has deep implications for how marketers stand to benefit from the technology.
For example, imagine that you had an AI model which was trained to predict whether your next ad was going to appeal to the audience that you'd selected for it. This could be useful, say, to show the client that the creative treatment is a winner in an objective way, or to test new creative elements quickly and cheaply. But if the model says your creative is going to underperform and can't explain why it's making that judgement, how are you going to make the ad better?
Content creation is expensive and time-consuming, so creating 100 ad creatives in the hope of finding the best-performing one is inefficient. The black-and-white results that machine learning algorithms tend to provide aren’t enough for marketers to glean actionable insights.
What if AI was interpretable for marketers?
Data-driven systems such as AI can be seen as existing on a scale from descriptive (discussing the past) to predictive (forecasting the future) and finally prescriptive (understanding how to change the future).
An AI system which could explain its decisions and outline what the user could change to get a better result would be a prescriptive system and would be a huge step toward finding a common language between human and machine.
That said: it’s really hard! Current interpretability methods are more aimed at researchers than business users. So what can we do to make AI more collaborative?
For AI to be part of creative marketing decision-making, steps need to be taken to combine the analytical with the artistic. This means finding a common language between human and machine, so that users can actually act on suggestions from the AI.
At Datasine, we’ve achieved this by teaching our AI platform, Connect, to understand human and social terms like “the image looks busy” or “that font is old-fashioned” or “this text is quite formal” – the way humans actually talk about creatives. By understanding these concepts at a scale that humans would find very difficult to match, Connect can learn how each of them drives ad performance from studying past interactions, and communicate this back to the user. That means the user can finally open up the ‘black box’, see why the algorithm made the decisions it did, and understand them in human terms.
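As a toy illustration of the general idea – not Datasine’s actual system – the sketch below scores past ads on hypothetical human-readable features (image busyness, font old-fashionedness, text formality; the names and data are invented), fits a plain logistic regression, and reads the learned weights back in those same human terms:

```python
# Toy sketch of an interpretable model over human-readable creative features.
# Feature names, data and training setup are illustrative assumptions only.
import math

FEATURES = ["image_busyness", "font_old_fashioned", "text_formality"]

# Hypothetical past ads: feature scores in [0, 1] and whether each performed well.
past_ads = [
    ([0.9, 0.2, 0.8], 0),  # busy image, formal copy -> underperformed
    ([0.8, 0.1, 0.7], 0),
    ([0.7, 0.9, 0.6], 0),
    ([0.2, 0.3, 0.4], 1),  # clean image, casual copy -> performed well
    ([0.1, 0.2, 0.3], 1),
    ([0.3, 0.1, 0.2], 1),
]

def train_logistic(data, lr=0.5, epochs=2000):
    """Plain logistic regression by gradient descent; returns per-feature weights."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def explain(weights):
    """Translate weights back into the human terms the features were named in."""
    return {name: ("hurts performance" if wt < 0 else "helps performance")
            for name, wt in zip(FEATURES, weights)}

weights, bias = train_logistic(past_ads)
print(explain(weights))
```

Because each weight is tied to a named, human-meaningful feature, the model’s “why” can be reported in the user’s own vocabulary (e.g. “busy images hurt performance for this audience”) – which is the essence of making a black-box prediction actionable.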
When AI can understand human terms – and explain its decisions using them – marketing teams and businesses can use it for creativity and innovation, with the data-backed knowledge that their creative choices will be effective and profitable.
Content by The Drum Network member:
Datasine personalises how brands communicate with their audience using AI and psychology. Find out more: https://t.co/bUzdZbKJFu