
There’s a regulatory reckoning coming for AI’s use in digital advertising

By Kelly Anderson, senior vice-president and head of data privacy and compliance

May 9, 2023 | 8 min read

As part of The Drum’s latest Deep Dive, The New Data & Privacy Playbook, Emodo’s Kelly Anderson looks at how the boom of AI is presenting new privacy conundrums for the advertising industry.


With the clock rapidly running down on the digital industry’s reliance on third-party cookies, advertisers and publishers are rightly focused on how ad campaigns can scale – without wasting spend – in 2024 and beyond. There’s been a great deal of talk – and action on the product development front – related to artificial intelligence (AI) and machine learning, and AI’s capacity to draw scalable predictive insights about consumer behavior from limited, privacy-compliant datasets.

But we need to remember that AI surfaces new privacy issues for advertising, too. The use of AI in advertising is already under scrutiny by regulatory bodies around the world, including the Federal Trade Commission (FTC) in the US and the Office of the Privacy Commissioner of Canada (OPC), as well as by legislation such as the European Union’s AI Act.

Industry stakeholders should pay close attention to these regulatory developments as they may impact the data used by these AI solutions.

All eyes on AI chat applications

ChatGPT – the generative AI program whose capabilities many users have found eerily prescient – has also attracted the interest of regulators.

ChatGPT and similar tools require data input – including, in some cases, data that could qualify as personally identifiable information (PII). Using PII for business purposes, including advertising, could result in revealing and processing information about a user’s protected class status – which in turn has clear potential to run afoul of privacy and fairness laws.

In March, the Italian data protection authority Garante banned the use of ChatGPT in the country in order to investigate owner OpenAI’s legal basis for collecting the data used to train the chatbot’s models. OpenAI must respond and provide remedies or risk the steep fines that come with GDPR breaches. The company’s AI-powered solutions, which have a tendency to lack transparency, need to uphold principles such as truth in advertising, the barring of discriminatory targeting practices and user privacy throughout the stages of collecting and processing the data that drives AI.

The US Fair Credit Reporting Act (FCRA) is intended to protect sensitive consumer information from being shared without authorization. If a business runs an algorithm that results in a consumer being denied credit, housing, employment or other services, the action could fall under the FCRA.

Meanwhile, in the EU, the proposed AI Act aims to assess the risk levels of AI applications. While it’s too early to say comprehensively how the bill will assign risk levels to every AI use case, lawmakers have pointed to ChatGPT and other generative AI tools as a particular risk area, because consumers may mistake their output for actual human-created content. EU regulators are certain to keep the privacy risks of tools like ChatGPT and Google’s Bard front of mind while developing the legislation.

Stopping accidental bias before it starts

While regulators are currently looking at the possibility that AI could inadvertently produce bias against marginalized groups – by producing inaccurate results or unfairly limiting access on account of a flawed algorithm – there’s also a pressing need to look at the data being put to use behind the scenes.

Transparency in data collection and use in algorithm-driven chatbots is an area of concern. It takes a lot of data to train models to behave fairly and accurately. Regulatory frameworks, as they stand today, have not yet given adequate attention to issues around ‘black box’ inputs. Many of these inputs – which involve pulling data from any number of sources – include data that may have been anonymized in accordance with the law. But even so, ‘anonymized’ data may still reveal information that could violate users’ privacy, or otherwise produce unfair and prohibited marketing or advertising outcomes. An AI application could even inadvertently undo the effects of anonymization – for instance, by combining sources in ways that violate these privacy regulations.
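To make that risk concrete, here is a minimal, hypothetical sketch – every dataset, column name and value below is invented for illustration – of how joining an ‘anonymized’ dataset with a publicly available one on shared quasi-identifiers, such as postcode and birth year, can quietly re-attach identities:

```python
import pandas as pd

# Hypothetical 'anonymized' ad dataset: direct identifiers have been removed,
# but quasi-identifiers (postcode, birth_year) remain alongside inferred traits.
anonymized = pd.DataFrame({
    "postcode": ["EH1 1AA", "SW1A 2AA"],
    "birth_year": [1985, 1990],
    "inferred_interest": ["short-term loans", "luxury travel"],
})

# Hypothetical publicly available dataset (e.g. a marketing or electoral file)
# that carries the same quasi-identifiers next to names.
public_records = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "postcode": ["EH1 1AA", "SW1A 2AA"],
    "birth_year": [1985, 1990],
})

# A simple join on the shared quasi-identifiers re-attaches identities to
# records that were supposed to be anonymous.
reidentified = anonymized.merge(public_records, on=["postcode", "birth_year"])
print(reidentified[["name", "inferred_interest"]])
```

Nothing in that join is exotic; it is the same kind of enrichment routinely used to scale campaign datasets, which is exactly why the inputs deserve scrutiny.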

We need to take a closer look at the types of data sources AI can rely on. In the digital advertising ecosystem, sample data is routinely combined with data from other publicly available sources to create datasets large enough to reliably predict scenarios that determine the relevancy of an ad campaign to a user.

This can impact automated decisions about the offers – creditworthiness, for example – for which the user is considered qualified or not qualified. In other words, these decisions are a matter of inclusion and exclusion, based on predicted relevancy, as the sketch below illustrates.
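As a rough illustration of what that inclusion-and-exclusion logic can look like in practice – every user, score and threshold here is invented, not drawn from any real system – an automated relevancy cut-off decides who is even shown a credit offer:

```python
# Hypothetical users scored by a relevancy model whose features may include
# proxies (such as postcode) that correlate with protected-class status.
users = [
    {"user_id": 1, "postcode": "EH1",  "predicted_relevancy": 0.82},
    {"user_id": 2, "postcode": "SW1A", "predicted_relevancy": 0.41},
    {"user_id": 3, "postcode": "EH1",  "predicted_relevancy": 0.77},
]

RELEVANCY_THRESHOLD = 0.5  # invented cut-off for serving the offer

shown = [u["user_id"] for u in users if u["predicted_relevancy"] >= RELEVANCY_THRESHOLD]
filtered_out = [u["user_id"] for u in users if u["predicted_relevancy"] < RELEVANCY_THRESHOLD]

# Exclusion driven by a score that leans on proxy features is precisely the
# kind of adverse, automated decision regulators are examining.
print("shown the offer:", shown)
print("filtered out:", filtered_out)
```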

Section 5 of the FTC Act and the UK’s Data Protection Act, enforced by the Information Commissioner’s Office (ICO), are examples of regulations that address fairness and equal opportunity. They require data to be used in ways that individuals would reasonably expect – including fairly and transparently. For example, AI applications must not be trained, even unintentionally, to exhibit racial bias.


Privacy action is required before 2024

AI can and does deliver great value for the ad industry, and we should expect that a growing number of businesses will find advertising use cases for it. We should also expect new, but reasonable, levels of regulatory oversight to help stem bad outcomes, while enabling innovation and business growth.

Any business in advertising that has not already brought in privacy and regulatory experts needs to do so immediately. They’ll need to inspect not only the data inputs used in modeling to enable AI predictions, but also the scenarios that post-processed data might produce and the fairness of those scenarios for the users they affect. Business stakeholders must raise these concerns to the highest level of management in order to address privacy and compliance across the entire organization, especially where AI is or will soon be put to use.

Kelly Anderson is senior vice-president and head of data privacy and compliance at Emodo. To read more from The Drum’s latest Deep Dive, where we’ll be demystifying data & privacy for marketers in 2023, head over to our special hub.
