AI can mitigate bias in advertising, per new IBM/Ad Council research

Bias – unconscious or otherwise – has always had a tendency to seep into advertisements, exerting a subtle pressure on who gets to see an ad and who doesn’t. New research produced by IBM Watson Advertising, which leveraged data from The Ad Council, provides a glimpse into the potential role that artificial intelligence (AI) could play in helping brands to reduce bias in their ads.

The findings suggest that AI can effectively detect, and possibly even reduce, bias in advertising. The report reflects a growing awareness, both within the advertising industry and in the broader culture, of the pernicious and often subtle ways in which prejudice can insinuate itself into public discourse.

The research was conducted over a six-month period in collaboration between IBM Watson and the Ad Council. Together they took a close look at a PSA from the Ad Council called ‘It’s up to you,’ which ran throughout 2021 and was aimed at reducing Covid-19 vaccine hesitancy. (The Ad Council is known for its iconic PSAs and characters, including Love Has No Labels, Smokey Bear and McGruff the Crime Dog.) The research team deployed IBM’s software to determine whether certain groups ‘were either systemically advantaged or disadvantaged’ – that is, whether they were more or less likely to be exposed to the ads being scrutinized.

The primary goal, in other words, was to identify and measure biases. The team also aimed to assess the impact of signals (in other words, ‘the context in which an advertisement is delivered’) upon biases in ads, as well as the role that AI could potentially play in helping to mitigate bias in advertising.
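The report does not detail its methodology, but the kind of exposure comparison described above – checking whether a group is more or less likely to be shown an ad – can be sketched in a few lines. The group names, audience sizes and impression counts below are entirely hypothetical:

```python
from collections import Counter

def exposure_rates(impressions, population):
    """Share of each group's audience that was actually shown the ad."""
    shown = Counter(impressions)
    return {group: shown[group] / size for group, size in population.items()}

def disparate_impact(rates, reference_group):
    """Ratio of each group's exposure rate to a reference group's.
    Ratios well below 1.0 flag a group the delivery may be under-serving."""
    ref_rate = rates[reference_group]
    return {group: rate / ref_rate for group, rate in rates.items()}

# Hypothetical audience sizes and logged ad impressions by group
population = {"group_a": 1000, "group_b": 1000}
impressions = ["group_a"] * 400 + ["group_b"] * 200

rates = exposure_rates(impressions, population)
ratios = disparate_impact(rates, "group_a")
print(ratios)  # group_b is reached at half the rate of group_a
```

A real audit would, of course, control for targeting parameters the advertiser set deliberately, so that only unintended skew is flagged.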

The research was announced today in partnership with The American Association of Advertising Agencies, or 4As.

The findings confirmed that bias against certain groups of people is, in fact, a very real problem. Often, the biases propagated via advertisements are completely unintentional and unknown to their human designers (‘unintended biases’) – the products of opaque decisions made by an algorithm that can lead to the disproportionate targeting (or ignoring) of a certain group.

The research also found that ad-targeting algorithms have become a bit more sophisticated and precise than was previously realized. Advertisers may set out to target, say, a particular age group, but the algorithm may end up – for any number of complex reasons – going a step further and targeting a subset of that population (those with a high school-level education, for example). “The algorithms can actually see other components ... that maybe we’re not asking it to look at,” says Robert Redmond, head of AI ad product design at IBM. “We can see when that happens that there could be the potential for us to optimize towards a group that we weren’t actually identifying. [We’re trying to] understand what’s happening underneath the algorithm, how it’s processing the other signals of information that it sees, and trying to find a way to translate that to something that we can decipher.”

The IBM research, as Redmond points out, is “step one.” Evidence of systemic bias has been detected. Now brands are faced with the much more daunting task of essentially reverse-engineering their marketing strategies to detect how and where those biases are actually seeping into their systems. “There’s an industry-wide effort to really unpack the concept of: ‘We’re seeing this in the data, this is the outcome, this is the translation for the specific campaign, and this might back into some cognitive biases somewhere,’” says Redmond. “It’s really difficult to make those relationships from the reverse. It’s a lot easier to stand in a room with other humans and see those types of cognitive biases happen in real time than it is to start with the data, look backwards and decipher what caused where we are, especially in something as complex as the digital advertising ecosystem.”
