AI gave Sudanese Barbie a gun, beware the bias

By Sabrina Lynch, Brand leader

August 30, 2023 | 6 min read

Brand consultant Sabrina Lynch has been keeping a close eye on briefs using AI for fear they will introduce biases to the work. She shares a warning with the industry.

Hey, marketers, let’s play two truths and a lie: Brand strategists are creative, Deep Dish is a pizza, and AI is not the future of change.

I’ll give you a helping hand. The statement about AI, subjectively speaking, is true. Technology systems that ‘think’ for themselves are the new darling of the marketing industry, and that scares me. AI is already more than twice as likely to be used for campaign development (28% vs. 12%). While I applaud the technology’s progress, from a handy tool for writing school essays to designing essential Nasa equipment, there’s a degree of recklessness in its current employment that isn’t being sufficiently reined in for my liking: the role that bias plays in its rapid development.

Prejudices perpetuated through the data used to train AI have led to algorithms that should give great cause for concern. Take Mattel’s campaign for the Barbie movie, for instance; it has become the summer case study for brand marketing and advertising. However, there was a flip side. AI-generated imagery by Midjourney showing what the doll would look like “in every country in the world” ended up depicting offensive cultural inaccuracies and macroaggressions, from South Sudan Barbie holding a gun to the whitewashing of Hispanic and Latina women’s facial features.

Our industry is so enthralled by the possibilities of leveraging the tech that it’s neglecting the real-time consequences of its current integration. This year alone, I’ve edited proposals by freelance junior copywriters and hypotheses from new-to-the-workforce strategists that were incoherent and, quite frankly, baseless.

Now, by no means do I believe that AI is a Trojan horse. As a hobbyist photographer, I enjoy playing with Adobe’s Generative Fill to render pictures as much as the next person. Does that mean I believe it’s a reliable solution when hitting a creative brick wall? No. While text-to-image technology presents a smorgasbord of experimentation, at its core its purpose is to make a facsimile of society, and this allows disparities to be taken to the extreme. In a recent analysis of 300+ images created by the deep-learning software Stable Diffusion, text prompts for occupations ranging from judge to doctor overwhelmingly generated images of men, except for low-paying roles such as housekeeper or social worker. For text prompts related to crime, the tech generated subjects with dark skin; no need to read between the lines here.
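For readers who want to pressure-test their own tools, here is a minimal sketch in Python of the shape such an audit takes: generate a batch of images per occupation prompt, label each one, and tally the skew. The `generate_images` and `classify_perceived_gender` helpers below are hypothetical placeholders (stubbed so the script runs end to end), not the methodology of the analysis cited above; the prompts and counts are illustrative only.

```python
import random
from collections import Counter

# Placeholder stand-ins so this sketch runs end to end. A real audit would
# call a text-to-image model (e.g. Stable Diffusion) and a perceived-
# demographics classifier here instead.
def generate_images(prompt: str, n: int) -> list[str]:
    return [f"{prompt} #{i}" for i in range(n)]

def classify_perceived_gender(image: str) -> str:
    return random.choice(["man", "woman"])  # placeholder label

OCCUPATIONS = ["judge", "doctor", "engineer", "housekeeper", "social worker"]
IMAGES_PER_PROMPT = 60  # ~300 images total, echoing the scale of the analysis above

def audit_occupation_bias() -> None:
    for occupation in OCCUPATIONS:
        images = generate_images(f"a portrait of a {occupation}", IMAGES_PER_PROMPT)
        tally = Counter(classify_perceived_gender(img) for img in images)
        total = sum(tally.values())
        # Print the per-occupation breakdown so skew is visible at a glance
        breakdown = ", ".join(f"{label}: {count / total:.0%}"
                              for label, count in tally.most_common())
        print(f"{occupation:>15}: {breakdown}")

if __name__ == "__main__":
    audit_occupation_bias()
```

Swap the two placeholders for a real image API and classifier, and the same loop becomes a crude smoke test for the disparities described above; the point is that checking is cheap compared with shipping biased creative.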

Even ChatGPT, developed by OpenAI, publicly acknowledges that its models encode social biases, stereotypes and negative sentiment toward “certain groups.”

We should not comfortably place our confidence in new platforms that lack integrity to build our corner of the economy. We are not at a stage where AI truly reflects believable human behavior or realities. It is a platform where erroneous assumptions about gender, sexuality, age, race, identity and religion are churned out without due diligence from human input.

Output is only as strong as input shaped by valuable human experience, a fact technologists are acutely aware of as they now set their sights on mimicking our critical thinking skills. A research team from Cornell University trained a deep-learning system on the sounds of keystrokes to predict what a person was typing with 95% accuracy. So, imagine the devices we use daily for work becoming a hotbed of data for creating believable proxies of original thought. I don’t want a future where AI spews an artificial version of my unique thought process. How would I even go about watermarking my synapses?

This is why platforms that combat existing biases are needed, such as X_Stereotype, which uses artificial intelligence to assess risk factors in advertising at the earliest stage of development. Meanwhile, in the wake of criticism from tech pioneers, including the ‘godfather of AI’ himself, Geoffrey Hinton, Microsoft, Google, Anthropic and OpenAI have formed a coalition to safeguard the development of advanced AI technology. Therein lies the issue: who should determine what “responsible” AI looks like, considering companies have already given themselves free license under the “wild, wild west” rules of its usage?

We need to see more involvement from policymakers, civil liberty groups and gender rights organizations to push for stronger safety measures. Are we putting our best foot forward to promote social equity? Are we acoustically safe from the intellectual theft of our original thoughts as marketers? Moreover, we need dataset input from populations who would otherwise be ostracized from AI’s development. This is the only way we can eliminate stereotypes and dynamically reshape user interfaces in the name of better equity.

For those curious about the lie – it was the statement about the deep dish. I don’t care what people say; it’s not pizza; it’s a thick tomato stew in a pie-crust dough, and I will fight anyone on the subject.
