Is AI the solution to privacy-safe targeting in a post-cookie world? Some experts think so
Media, privacy and tech experts explain the promise – and limitations – of AI applications in ad targeting and measurement, as part of The Drum’s week-long deep dive into all things AI and web3.
AI is already helping advertisers reduce their reliance on individual user identifiers online / Adobe Stock
Jim Cramer, ex-hedge fund manager and host of CNBC’s Mad Money, said in a segment last week that he predicts advertising will be the industry most disrupted by AI – even more than call centers or drive-through restaurants.
Specifically, Cramer cited some programs’ abilities to “write better copy than humans.”
He’s certainly on to something: AI is already being harnessed by advertisers to write and edit copy, develop visuals and supercharge creative campaigns – though experts hotly debate the degree to which AI will replace, rather than merely augment, such functions.
“Advertising is one of the industries that is talking a lot about how AI will affect the way they work, from coming up with ideas for ads to creating visuals,” says Parry Malm, chief executive at Phrasee, an AI-powered content platform for marketers.
But the technology also has the potential to help answer another of advertising’s most urgent needs: providing a mechanism for privacy-safe ad targeting and measurement amid widespread signal loss and the deprecation of third-party cookies.
The writing is on the wall
User data privacy – specifically as it relates to targeted advertising – has become a key concern among consumers, regulators and advertising industry stakeholders.
In the US, efforts to enhance privacy have manifested in the form of a slate of new state-level privacy bills – many modeled after the sweeping California Consumer Privacy Act (CCPA), which borrows fundamental components from the EU’s General Data Protection Regulation (GDPR), including consumer rights such as the right to ‘be forgotten’ by an organization. This year, new privacy laws will go into effect in five US states.
Meanwhile, as progress on a federal-level privacy bill has fizzled in recent months, the US Federal Trade Commission has kicked off a new rulemaking process with the intention to “crack down on commercial surveillance and lax data security practices.”
As regulatory pressure reaches new heights, tech’s biggest players are actively paving a new path for privacy. Perhaps most notably, Google has promised to fully deprecate third-party cookies – a ubiquitous technology that allows data brokers and advertisers to track individual users’ behavior across the web – by next year (a target timeline that has already been twice postponed). In its place, Google is developing a new set of tools that are designed to provide advertisers with effective audience targeting and media measurement capabilities – while providing users with a higher degree of privacy online.
To add to the minefield, Apple launched App Tracking Transparency (ATT) two years ago, a framework that requires apps to obtain a user’s permission before tracking their activity across other companies’ apps and websites. It created an obvious hurdle for advertisers and publishers by impeding tracking and targeting; in fact, the policy cost Meta an estimated $10bn in ad revenue in 2022 alone. Intelligent Tracking Prevention (ITP), which preceded ATT but has since been updated, is Apple’s web-based privacy technology, which blocks third-party cookies by default and caps the lifetime of most script-set first-party cookies at seven days.
AI to the rescue?
In many experts’ eyes, achieving the desired end of providing advertisers with precise ad targeting capabilities while respecting user privacy will necessarily involve artificial intelligence.
In some of the most straightforward use cases, AI can be used to improve the efficiency and effectiveness of first-party targeting – rather than less private, third-party targeting – strategies. “With the limitations of ATT and ITP, it becomes a lot more difficult to use traditional demographic and third-party psychographic data,” says Michael Scharff, chief executive officer and co-founder of Evolv AI, a firm that helps brands connect with target audiences using AI. “This means marketers need to be more specific about where they advertise, which requires a better understanding of the users they’re targeting. Where AI really helps is informing what’s working on the ad side and then optimizing zero- and first-party data. This information is much more actionable and isn’t subject to the sort of invasive data collection practices of pre-ATT and ITP targeting.”
In essence, Scharff says, advertisers can use AI systems to evaluate different buyer personas using an organization’s zero- and first-party data. “Good AI can act on a lot of data points and the interaction between data points in real-time, so making sure they have access to the right behavioral data is critical,” he says. Personas can then be augmented with additional external qualitative and quantitative research. This data can be used to select the appropriate channels for reaching specific personas. Plus, once a campaign has been deployed, AI can power post-click analytics for further optimization and can help home in on attribution.
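The persona-evaluation approach Scharff describes can be sketched in miniature: grouping a brand’s own first-party behavioral records into persona clusters with a tiny k-means routine. This is an illustrative toy only – the field names and data are invented, and real systems would use far richer signals and production ML tooling.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: returns final centers and the point clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical first-party records: (sessions per week, avg order value)
visitors = [(1, 20), (2, 25), (9, 180), (10, 200), (1, 22), (8, 190)]
centers, clusters = kmeans(visitors, k=2)
# The two clusters separate casual browsers from high-value shoppers
```

Each resulting cluster stands in for a persona that can then be enriched with qualitative research and matched to channels, as the article describes.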
But AI is being used for privacy-safe targeting in other ways beyond first-party data strategies, too. A handful of firms, like GumGum, are tapping into the power of AI to launch contextually targeted ads. More specifically, GumGum uses computer vision – a branch of AI focused on training computers to analyze visuals and text – to make assessments about digital environments and place ads at the times and places in which audiences are likely to be the most engaged.
“Our machine learning and AI tools allow us to scale our ability to quickly analyze vast amounts of unstructured data – digital content – with precision and accuracy,” says Lane Schechter, director of product for GumGum’s contextual intelligence platform Verity. “Our AI-trained models are helping advertisers target and drive successful campaigns without the use of the cookie or an ID.”
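The core idea – matching ads to page content rather than to a user ID – can be illustrated with a deliberately simple keyword-overlap classifier. This is not GumGum’s technology, which relies on trained computer-vision and NLP models; the categories and keywords below are invented for illustration.

```python
# Toy contextual classifier: scores a page's text against ad categories.
# No user identifier is involved anywhere - only the page content.
CATEGORY_KEYWORDS = {
    "Fitness": {"workout", "gym", "running", "protein"},
    "Travel": {"flight", "hotel", "itinerary", "beach"},
}

def classify_page(text):
    """Return the best-matching ad category for a page, or None."""
    words = set(text.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(classify_page("Cheap flight and hotel deals for your beach holiday"))
# -> Travel
```

An ad server could then serve a travel-brand creative on that page without knowing anything about the visitor.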
Schechter envisions that the applications for AI in privacy-safe advertising will only expand from here. “AI will [usher in] the optimization era – everything we do more manually now will be trained to do through AI,” he says. “We envision a world where there is dynamic creative optimization, where AI can take in all the signals from a digital environment – contextual signals and attention signals – and the goals and needs of a brand, and in real time build a creative that aligns with that environment and drives optimal attention.”
More traditional advertising agencies and organizations are also building out their own AI capabilities. Publicis Media, a branch of the multinational advertising holding company Publicis Groupe, for example, has created a decisioning model that harnesses machine learning and natural language processing tools to address a range of issues including mitigating “CPM inflation while enhancing media insights, media performance and brand safety initiatives,” according to Patrick Houlihan, the organization's senior vice-president of decisioning.
And when it comes to preserving user privacy, Houlihan believes AI has far-reaching potential applications. Broadly speaking, he says, AI can aid advertisers and analysts in “finding commonalities in data sets and then launching creative tied to those insights at scale by targeting impressions or keywords versus IDs tied to users.”
Tech titans leading the charge on signal loss across the digital ecosystem – Google chief among them – are also tapping into AI to deliver new privacy-centric targeting solutions. Though Google’s various Privacy Sandbox initiatives are ongoing, the company’s most current proposed cookie alternative is called the Topics API. With Topics, a user’s browser learns about them based on their browsing history – and uses these learnings to assign specific interests, or topics, to the user, like ‘Fitness’ or ‘Travel & Transportation.’ The API surfaces at most three topics per user – one from each of the last three weeks – and topics older than three weeks are automatically deleted. These topics, according to Google, marry the organization’s proprietary taxonomy with the Interactive Advertising Bureau’s Content Taxonomy V2. For sites that Google has yet to categorize, machine learning algorithms are deployed to evaluate the site’s domain name and make an estimate.
Topics is more privacy-preserving than its predecessor, Google’s Federated Learning of Cohorts (FLoC), in that it inhibits fingerprinting. With Topics, a user can only be assigned topics from a limited pool of options, which makes it more challenging to potentially identify individual users who are assigned to a given topic.
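The selection scheme the article describes can be sketched as follows – a toy illustration of per-epoch topic selection, not Chrome’s actual implementation (the function names, data shapes and example topics are invented; real Topics also adds noise and per-caller filtering):

```python
from collections import Counter

EPOCH_WEEKS = 3  # topics older than three weekly epochs are dropped

def top_topic_for_epoch(visited_topics):
    """Pick the user's most frequent topic for one week of browsing."""
    if not visited_topics:
        return None
    return Counter(visited_topics).most_common(1)[0][0]

def topics_for_caller(history_by_week):
    """history_by_week: per-week lists of visited-site topics, newest last.
    Returns at most one topic per epoch from the last three weeks."""
    recent = history_by_week[-EPOCH_WEEKS:]
    picks = [top_topic_for_epoch(week) for week in recent]
    return [t for t in picks if t is not None]

# e.g. three weeks of browsing mapped onto taxonomy topics
history = [
    ["Fitness", "Fitness", "News"],
    ["Travel & Transportation"],
    ["Fitness", "Technology", "Technology"],
]
print(topics_for_caller(history))
# -> ['Fitness', 'Travel & Transportation', 'Technology']
```

The key privacy property is visible even in the toy: callers only ever see coarse topics drawn from a small fixed taxonomy, never a stable per-user identifier.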
“We believe Topics is a significant improvement for user privacy and that it provides strong tools for publishers, advertisers and advertising technologies to provide relevant content and advertising without having to rely on tracking users across websites,” Vinay Goel, product director at Google Chrome’s Privacy Sandbox, told The Drum in an interview last year. Not only is Topics more privacy-focused, it also improves the accuracy and precision of interest-based advertising, Goel contends. “With FLoC, advertisers and advertising parties had to make their own interpretations of what a website visitor grouped within a particular cohort may be interested in. That left room for interpretation, including possibly misinterpreting the website visitor’s interests. We believe providing topics instead of cohorts provides clearer insight into what advertising categories the website visitor may be interested in.”
In a nutshell, Google is focused on developing machine learning and AI models that can help provide advertisers with ad targeting precision on par with third-party cookies – with no user-level identifiers in the mix.
A balancing act
Of course, while AI can help advertisers, publishers and developers navigate an increasingly privacy-conscious world, its risks are not to be neglected, experts say.
For one, says GumGum’s Schechter, AI and machine learning-based tools “are only as good as the data you train them on.” As such, organizations need to be wary of their data sources and training programs. “Companies need to be very thoughtful and strategic about ensuring the data they are training these models on accounts for implicit and explicit bias and diverse representation.”
Others acknowledge that a core focus for many privacy advocates is the minimization of data collection. And while AI may be useful in reducing the volume of deterministic data that is required to support effective ad targeting and measurement, reducing the volume of data collected “is not alone a sufficient solution,” argues Arielle Garcia, chief privacy officer at UM Worldwide, an Interpublic Group agency.
Plus, as she points out, data collection “is only one component of the far broader regulatory scrutiny and public discourse around privacy and responsible commercial data use.”
Instead, what’s needed, Garcia argues, is a foundational reset in how we think about responsible commercial data practices. “In a world where data is ubiquitous, any harm can be a data-enabled harm,” she says. “Without addressing the fundamental lapses in accountability, transparency and responsibility that fostered the erosion of trust and [are] at the core of today’s signal loss, AI-powered solutions will face the same fate” as third-party cookies and mobile identifiers, she adds.
In fact, the industry is already seeing growing scrutiny around interest-based audience targeting – which has been a core focus of Google’s and Meta’s in light of signal loss – based on the fact that some interest-based signals might be used to predict sensitive personal information. As Garcia puts it: “Using behavioral data and other signals to algorithmically predict whether someone has a particular health condition, or as a proxy for ethnicity, still poses risk of harm, discrimination and exploitation. Without guardrails to prevent this, and absent transparency, AI would serve only to make the risk of harm more difficult to detect and prevent.”
But even skeptics like Garcia don’t dismiss AI’s potential to do good in privacy-preserving personalized advertising. She acknowledges that AI-driven solutions could help deliver more age-appropriate content to young users and even help inhibit potentially harmful or dangerous messaging to groups or individual users.
At the end of the day, “it’s about ensuring responsible development and responsible use of AI-powered ad targeting and measurement solutions,” she says. With those objectives in mind, she advises that the industry focus more of its attention on the end-user; she says we should move away from thinking in terms of ‘privacy-centric’ advertising and consider instead ‘people-centric’ advertising.
“Through this lens,” she says, “the ability for AI-driven solutions to enable relevant advertising while tempering individually-identifiable data collection, use and sharing can be a positive – but such solutions also need to contemplate the impact on individuals and address the potential for exploitation and harm such as manipulation, deception, enabling sensitive inferences and disparate impact.”
For more on the latest happenings in AI, web3 and other cutting-edge technologies, check out The Drum’s latest Deep Dive – AI to Web3: the Tech Takeover. And don’t forget to sign up for The Emerging Tech Briefing newsletter.