
Marketing data: 3 serious risks of using AI that brands need to be aware of

Acxiom | Open Mic article

This content is produced by a publishing partner of Open Mic.


May 22, 2023 | 7 min read

Brands are scrambling to experiment with artificial intelligence (AI), to grasp its new potential for understanding, reaching, and engaging customers, and to weigh the risks it brings with it

Here, Brady Gadberry, SVP, head of data products at Acxiom, has three pieces of cautionary advice for brand marketers as they embrace the new technology.

It seems like almost every day, a new product launch or announcement reignites excitement (and not a small amount of trepidation) about the possibilities of AI for marketing – and just about any knowledge industry.

The rise of large language models (LLMs) like ChatGPT is at the center of this disruption, and many are calling for a pause on the training of AI systems more powerful than GPT-4. The speed of disruption, combined with our collective tendency to let innovation outpace the laws, ethics, and governance controls we have in place, has added urgency to this call for a pause. Signatories of the cautionary open letter include Acxiom CEO Chad Engelgau, who has warned that “society needs levers and fulcrums to create better scalability” to protect against the risks of AI “hallucinations” and potential privacy issues.

Like most, I’m very excited by the possibility of giving less-technical people an intuitive way to interact with and benefit from data and analysis. However, as the growing power of LLMs becomes apparent, I want to share some of the misgivings we’ve heard in recent weeks and explore these issues further to see what they mean for data-driven marketers. Where should the line between excitement and caution lie, and how can brands use AI like LLMs to build a better understanding of their customers while minimizing the risks?

Choose your applications of AI carefully

As Chad noted in the article I mentioned, ChatGPT and similar tools can be highly effective in certain areas, like short-form marketing creative. But the price of scalability and speed is often imprecision (or flat-out fabrication) in the content itself, so human guidance and review are still going to be essential.

This issue will be a huge consideration for brands experimenting with AI. To avoid AI’s pitfalls and work within its limitations, first we have to understand them.

You can use LLM-based AI to help you combine and interpret datasets and make inferences based on correlations in the data. For example, AI is great at answering questions, individually and at scale, that require an understanding of relationships between words, concepts, and data that don’t ‘match’ or ‘join’ in datasets, but that people intuitively understand. Is a business open 24 hours? If it’s a fast-food restaurant with a drive-through in an urban area, that’s quite likely. If it’s an accounting office, likely not. With large sets of data about businesses, it has traditionally taken people manipulating and organizing the data in useful ways to calculate those answers. With these new tools we can skip to the good part, with less of the work needed to ready the data for analysis.
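To make that concrete, here’s a minimal sketch of this kind of inference, assuming the OpenAI Python client; the model name, record fields, and prompt wording are illustrative assumptions, not a prescribed setup.

```python
# A minimal sketch: asking an LLM to infer an attribute ("open 24 hours?")
# that isn't in the dataset, from fields a person would intuitively reason
# over. Assumes the OpenAI Python client; model and fields are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

businesses = [
    {"name": "Burger Barn", "category": "fast food",
     "features": "drive-through", "area": "urban"},
    {"name": "Ledger & Co.", "category": "accounting office",
     "features": "walk-in", "area": "suburban"},
]

for biz in businesses:
    prompt = (
        "Based on this record, is the business likely open 24 hours? "
        "Answer 'likely', 'unlikely', or 'unknown', with one sentence "
        f"of reasoning.\nRecord: {biz}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    print(biz["name"], "->", response.choices[0].message.content)
```

The answer that comes back is an inference, not a fact – which is exactly why, as discussed below, human review stays in the loop.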

But here’s where things get tricky. Yes, AI can do some of the “thinking” and can remove the need for human rote work, but you wouldn’t want it to make strategic decisions about the marketing challenges you face, or about the ways in which you’ll use data to interact with people. Consider in January of this year, when The Verge revealed that many of CNET’s recent stories had been written partially or entirely by an unnamed AI program. Many were straightforward financial reporting, but in the event of an ‘AI hallucination,’ numbers or even trends could have been reported incorrectly. At the time, those stories were not identified as having been AI-generated, so the level of scrutiny necessary wasn’t clear to the reader. For now, it’s still a step too far to allow unsupervised uses that directly impact your brand and its relationship with your customers.

Transparency will be critical – and difficult

The risks around AI decision-making don’t just involve ‘hallucinations’; they are also born of the opacity of the decision-making process itself. We have to ensure there’s transparency around how and why a decision was made. This is a major factor that will set apart the brands that use data ethically.

The ‘black box’ nature of AI systems has been discussed by experts like Brian Christian in his book The Alignment Problem: Machine Learning and Human Values. The more sophisticated a model gets, with more types of data coming together to make extrapolations, the more opaque the system becomes. If they’re not careful, brands could find themselves making campaign decisions without knowing why they’re making them. This becomes a major issue when biases creep in, especially when datasets could include information about protected classes like race, gender, and sexual orientation.

The inclusion of protected classes isn’t the only way bias can be introduced into a decision. All AI products are based on training data, and expecting a perfect outcome assumes the AI was shown previous examples of marketing decisions that were both entirely unbiased and successful. As marketers, we know that while that sometimes happens, it isn’t true in every case – or even in most.

As the complexity of AI systems increases, the difficulty of defending against biased decision-making grows exponentially. You don’t just have to worry about sensitive data such as protected classes, but also about data that can become a proxy for, or highly correlated with, those attributes. To avoid this situation, you need to understand what’s happening inside your models, and that requires transparency – for the brand and for the customers who may want to know why they’re being made a certain offer. This is possible, but to keep checks and balances on what AI is doing at scale, we need to take measured steps forward. That will help ensure we don’t let the technology get ahead of our values by using data in ways we would never allow in systems we design more deliberately.
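As an illustration of how a team might begin screening for those proxies, here is a small sketch using pandas; the column names, sample data, and correlation threshold are hypothetical, and a real bias audit would go well beyond simple correlation.

```python
# A small sketch: flagging model features so strongly correlated with a
# protected attribute that they could act as proxies for it. Column names,
# data, and the threshold are hypothetical, for illustration only.
import pandas as pd

def flag_proxy_features(df: pd.DataFrame, protected_col: str,
                        threshold: float = 0.8) -> pd.Series:
    """Return features whose absolute correlation with the protected
    attribute exceeds the threshold."""
    feature_cols = [c for c in df.columns if c != protected_col]
    correlations = df[feature_cols].corrwith(df[protected_col]).abs()
    return correlations[correlations > threshold].sort_values(ascending=False)

# 'zip_income_rank' may track a protected class closely even though the
# class itself was excluded from the model's inputs.
df = pd.DataFrame({
    "protected_attr":  [0, 1, 0, 1, 1, 0],
    "zip_income_rank": [2, 9, 3, 8, 9, 1],
    "pages_viewed":    [5, 6, 4, 5, 7, 6],
})
print(flag_proxy_features(df, "protected_attr"))
```

A check like this won’t make a model transparent on its own, but it is the kind of measured, inspectable step the paragraph above argues for.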

AI is not a substitute for good data – or for good marketing

One mantra I keep coming back to when I think about the excitement (and the fear) around AI is this: AI isn’t magic, it’s just math. Sure, it’s extremely fancy math done very quickly on powerful computers, but ultimately, it’s only going to be as good as the fuel you feed it. And that fuel is data.

If you want to use AI to help you make predictions about customer behaviors and inform your decision-making, you still need great first-, second-, and third-party data. You need a solid first-party identity backbone so you can gain a complete view of the real people who are your customers. To grow, you still need access to information that helps you predict that someone might be your next great customer, and to reach them with a message that might introduce them to your new LLM-powered chatbot to find out about your brand.
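To ground that, here is a brief sketch of the kind of propensity model that sits behind ‘predicting your next great customer’, assuming scikit-learn; the features and data are hypothetical, and the point is simply that the math is only as good as the data feeding it.

```python
# A brief sketch: a simple propensity model scoring which prospects look like
# the next great customers. Features and data are hypothetical; in practice
# the inputs come from well-governed first-, second-, and third-party data
# tied together by a solid first-party identity backbone.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [purchase_frequency, email_engagement, category_affinity]
X_train = np.array([
    [0.1, 0.2, 0.0],
    [0.8, 0.9, 1.0],
    [0.3, 0.1, 0.0],
    [0.7, 0.6, 1.0],
])
y_train = np.array([0, 1, 0, 1])  # 1 = went on to become a high-value customer

model = LogisticRegression().fit(X_train, y_train)

prospect = np.array([[0.6, 0.7, 1.0]])
print("Propensity score:", model.predict_proba(prospect)[0][1])
```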

The fundamentals haven’t changed, and they aren’t going to any time soon. Smart technology doesn’t relieve marketers of the need to maintain good data practices. First and foremost, brands will always need to understand their customers and the customers they’d love to have. AI will augment these abilities, but it won’t be a substitute for good marketing.

