Fixing machines and not their makers: why AI bias is a human problem
For The Drum’s deep dive into AI and web3, Lauren Aguilera Brown of agency Distillery argues that brands ignore the biases in generative AI programs at their peril.
Bias in AI: an all-too human problem, says agency Distillery / Михаил Секацкий via Unsplash
Robots don’t care about your feelings.
ChatGPT was released in late November 2022; within two months it had secured a reported $10bn Microsoft investment and amassed over 100 million users. Although the first successful AI program was written over 70 years ago, this new consumer-facing tech has opened a Pandora’s box of debates around ethics and the role of AI in our everyday lives.
At Distillery, our ears perked up when ChatGPT entered the scene. As a content company, we’ve been writing about AI for our clients for some time, whether by showcasing how it helps improve customer experiences or highlighting its ability to assess insurance risk in real-time. But this presented another opportunity to push creative boundaries.
We’ve tested the program’s usefulness in writing newsletters and interview questions, suggesting blog ideas, rewriting copy, summarizing research, and writing wedding vows (kidding… maybe). We even created a children’s storybook from scratch for Chinese New Year, using five different AI tools (ChatGPT, Midjourney, Mubert, Narakeet, and our own, Voundry AI).
This project gave us an opportunity to test and learn from AI, creating in two days what would normally take four to six weeks. What we discovered was that AI is a great tool for time-saving, brainstorming, and fun – but it brings its own pitfalls.
The biggest one: bias.
Cognitive biases: our big dumb brains
Psychologists have identified over 180 known cognitive biases – our attempts to tidy a messy world in our minds. We favor those in ‘our own’ group, for example, and if a conclusion supports our existing beliefs, we’ll rationalize to support it. These biases dictate how we vote and spend money, who we befriend, where we work, and what medicine we take. Most of the time, we’re not even aware of them, making it inevitable that the systems we build will incorporate our own worldview.
A 60-second search will submerge you in an ocean of content agencies, YouTubers, and influencers all asking the same things: Will AI make me obsolete? Are robots stealing our jobs? Is ChatGPT sexist/racist/homophobic? (The answers: Maybe. No. And no, not in the way you think it is.) AI, like a child, is a young creation that can only know what it’s told. Monkey see, monkey do. It parrots what it hears and mimics what it sees.
It’s the same with building an algorithm. Designers unknowingly introduce their own biases into the model, or into the training data that ‘teaches’ the algorithm how to behave. Training an AI on an incomplete or unrepresentative dataset produces ‘algorithmic bias’.
Unsurprisingly, bias creeps in at several design stages: the way we frame the problem, the way we collect data, and the way the data is prepared can all reflect or exacerbate existing prejudices.
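The point can be made with a deliberately tiny sketch (the data and names here are invented for illustration, not taken from any real system): a frequency-based ‘model’ trained on a skewed dataset will faithfully reproduce that skew, with no intent to discriminate – it simply learns what it is shown.

```python
from collections import Counter

# Hypothetical toy "training data": jobs paired with the pronoun used
# in historical write-ups. The dataset itself is skewed.
training_data = [
    ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

def most_likely_pronoun(job, data):
    """Predict by majority vote over the training examples.
    The model has no notion of fairness, only of frequency."""
    counts = Counter(pronoun for j, pronoun in data if j == job)
    return counts.most_common(1)[0][0]

print(most_likely_pronoun("engineer", training_data))  # reproduces the skew: "he"
print(most_likely_pronoun("nurse", training_data))     # reproduces the skew: "she"
```

Nothing in the code is prejudiced; the bias lives entirely in the data it was handed – which is exactly how far larger models inherit the blind spots of their training sets.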
The elephant in the room: us
When we consider AI’s relationship with hate speech, sexism, racism, political corruption, or anything else, we have to remember that AI is not trained to be polite, considerate, or respectful. It’s simply not able to value life or defend the vulnerable. Nor is it trained to contextualize data sets amid the whole of human history, and every possible perspective of a single issue. It is not a person. It is trained to do exactly what we teach it to do, with the information we believe to be correct and important.
Our Chinese New Year film, for instance, brought Chinese zodiac symbols to life. AI can only answer what you ask it, so to avoid creating a boilerplate Chinese New Year story (or worse, an exaggerated one), we didn’t simply instruct the tools to make the storybook for us – we shaped the creative direction ourselves.
“Our visuals were all tailored according to what we put into the AI – and we focused on story first, regardless of the culture it reflected,” says Gui Libby, Distillery’s creative director in Singapore. “We could have chosen the expected red imagery and more Asian-influenced design, but we chose a design and a voice that could cut through any demographic and make the celebration exciting and accessible to more people.”
Studio D: battling bias, not its symptoms
In the world of content, it’s common to lean on a core set of creatives – dependable individuals who will come through. But going back to the same people, again and again, creates a limited range of diversity in our work – and, if left untreated, a major blind spot for us and our clients. Instead, we’re identifying our own biases and broadening our horizons to include the incredible talent we know is out there.
That’s why we’re growing Studio D, a curated global network of diverse creatives, to help elevate under-represented voices in our narratives – for better creative, stronger connection, and greater impact.
This is not a tick-box exercise. It’s not a corporate mandate on diversity in the workplace. It’s our way of stripping things to the floorboards and asking: what makes a person valuable? What is the most important thing about them? A person’s depth of character, curiosity, and creativity cannot be calculated from the labels they wear.
That holistic view is something that we’ll keep weaving into the AI platforms that we use. We’re excited to innovate with AI as a tool to improve our best practices, challenge our own biases and assumptions, and make content that is unexpected, innovative, and meaningful.
Whether you see AI as just a trending topic, a technological advancement, a shortcut, a tool, or a threat, ultimately, it’s a mirror. We’ll see the best and worst parts of ourselves in what we build, and it will only ever show us what we show it.
For more hot takes and cold hard looks at the emerging tech landscape, check out The Drum’s deep dive on AI to web3.