Today’s youth will save brands – and humanity – from the bias of AI
From speedy, AI-gleaned data driving real-time campaigns, to advertising on an AI-dependent platform like Amazon’s Alexa, whether we realise it or not, AI’s place at the marketing table is already set.
In a 2018 global survey by Forrester, 86% of marketers thought AI would make marketing teams more efficient and effective, and 82% believed it would reinvent the way they work.
Why? Because AI has the potential to do more than re-order our favourite groceries and recommend playlists. It will be able to predict the performance of pension funds, coordinate disaster efforts and make life or death medical decisions.
But we’re not quite there yet – and herein lies the danger. Brands are so blinded by future possibilities that they are skipping over essential groundwork. Where AI is really working is with single-objective chatbots, like Sephora’s, which recommends make-up based on the data customers input. Simple and effective. It’s working well in making brand experience consistent across channels.
Where it’s not working is with products built on non-representative or insufficient datasets. One US tech company trained a product on a dataset that was 77% male and 83% white, and recorded a 97% accuracy rate. What happens when that product hits the market? Maybe it ends up like Google’s 2015 photo software, which tagged two black users as ‘gorillas’ because of under-representation in the data, or the infamous Tay chatbot, which had to be taken down within 16 hours after she became the bigoted uncle at the Microsoft family party.
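It is worth pausing on why a headline accuracy figure like that can mislead. A quick back-of-the-envelope sketch (with invented numbers – the split below is illustrative, not the company’s actual results) shows how a model can post 97% accuracy overall while performing markedly worse on the under-represented group:

```python
# Hypothetical illustration: overall accuracy can mask subgroup failure.
# All figures below are made up for the sake of the arithmetic.

majority_n, majority_correct = 830, 825   # 83% of the test set
minority_n, minority_correct = 170, 145   # 17% of the test set

overall_acc = (majority_correct + minority_correct) / (majority_n + minority_n)
majority_acc = majority_correct / majority_n
minority_acc = minority_correct / minority_n

print(f"Overall:  {overall_acc:.1%}")   # 97.0%
print(f"Majority: {majority_acc:.1%}")  # 99.4%
print(f"Minority: {minority_acc:.1%}")  # 85.3%
```

Because the minority group is such a small slice of the test set, its errors barely dent the overall number – which is exactly why accuracy should always be reported per group, not just in aggregate.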
Another classic blunder from Youth Laboratories in 2016 saw an AI beauty contest choose nearly all white winners. It was supposed to remove human bias, but here’s the thing: the output is only ever as good as the input and the input is nearly always way too white.
A racist Twitter bot or a disappointing beauty contest is one thing. But brands will run into real problems when they are held responsible for outcomes that have real global impact. After mounting pressure, Mark Zuckerberg has taken responsibility (albeit lightly) for the Facebook News Feed algorithm’s potential impact on the 2016 US elections, vilifying Facebook further at a time when younger users are leaving in droves.
This is why we can’t be forgiving of brands that rush into using flawed technology. Because for brands the consequences may be embarrassing, but for the people affected there are far greater implications in terms of health, financial stability, civil liberties and even life itself. We know politicians will only legislate once the worst has happened, so we need someone to be responsible now. Guess what? Brands, it’s you.
If brands buy technology sold by private companies, they need to know how it works and what data was used to train it – or test it independently themselves.
Generally speaking, I think we would have a better idea of what to do if AI suddenly turned and tried to obliterate us, than what to do about the slow seep of racial, gender and socio-economic bias that it seems to be proliferating.
But, actually, there is a new breed of AI bias guardians and – hallelujah – some of those leading the charge are young, female or from minority groups. Enter all-round overachiever and MIT researcher Joy Buolamwini, who leads the Algorithmic Justice League in its fight against coded bias. Her research explores the intersection of social-impact technology and inclusion. Another pioneer to watch is Timnit Gebru, who joined Microsoft’s FATE team (Fairness, Accountability, Transparency and Ethics in AI) to find and address biases in AI data.
This may be where the answer lies. Youth and technology are inextricably linked. The youth are the early adopters – the ones who decide if your brand is going to sink or swim, and the most vocal champions of progress.
Recent research by student marketing agency, Seed, also shows that this group is the most ethical, liberal and environmentally minded generation to date. They don’t see grey areas; immoral behaviour is deemed just as bad as illegal behaviour, and they are determined that brands should be held accountable. This means the potential for brand PR disasters is scarily big. But it also means there is a group of people already primed to be the moral and ethical guardians of our new tech.
So, although it’s ironic that things like facial recognition and voice assistants (which have been shown to exhibit massive bias) are most likely to be embraced by younger audiences, they might be the very ones to teach the robots to be human.
After all, it is easier to remove bias from machines than people.
Krupali Cescau is head of planning at brand experience agency, Amplify