2023 will be the year of AI accountability
With AI seeing growing adoption in the world of business – and across marketing and advertising functions in particular – emerging technologies will come under greater scrutiny this year when it comes to assessing algorithmic biases, data privacy and more, predicts Gartner’s Andrew Frank.
Artificial intelligence (AI) is taking on an ever-greater role in the lives of marketers. Technological breakthroughs, increasing privacy-related restrictions on data collection and economic pressures mean that a growing number of marketing teams are turning to AI to solve their problems. And it isn’t hard to see why – AI enables marketers to operate more efficiently, deliver content at scale and optimize campaigns to drive results.
However, regulators and advocacy groups are increasingly concerned about manipulative and biased uses of AI: developments such as the EU's AI Act and the US's AI Bill of Rights mean that brands' use of AI is attracting growing scrutiny. In recent years, several brands have come under fire over their use of advanced technology to influence consumers in creepy and inequitable ways.
In fact, Gartner anticipates that this year, over a dozen enterprises will come under fire in the media or legal proceedings for ethical lapses in their use of automation within marketing campaigns.
Among the latest round of Gartner’s marketing and advertising predictions for 2023 and beyond was the expectation that 70% of enterprise CMOs will identify accountability for the ethical use of AI among their top concerns by 2025. With this in mind, and AI becoming increasingly pervasive in marketing, could 2023 be the year that marketers sit up and take notice of its ethical implications?
Is AI a reputational risk?
In October 2022, a year after the EU released a draft of its proposed AI Act, the US issued its first ‘AI Bill of Rights,’ a set of nonbinding guidelines focusing on the ethical challenges exposed by the growing adoption of rapidly advancing AI technologies in society. Facing future regulatory scrutiny, plus calls from advocacy groups and growing consumer awareness of the issue, marketers must take notice of three key risks when using AI:
1. Algorithmic discrimination
Marketers must be aware of the dangers of automated systems having disparate negative impacts on people with protected characteristics. This routinely happens when models are trained on historical data that reflects prevalent social biases. Many tools rely on large models trained on huge external datasets, which rules out the data hygiene practices an organization can apply to models built on its own data. The question of who's accountable when biases surface in marketers' use of open tools is unresolved, but brands are likely to be held responsible.
2. Data privacy
While it’s hardly a new concern for marketers, the use of de-identified data in algorithmic training contexts introduces new complexities. This is particularly true when data comes from multiple sources, as in a collaborative data clean room scenario. AI inferences made from data that may seem innocuous or necessary, such as a device’s approximate location or language settings, highlight the difficulties of establishing whether information should be considered personal if it’s impossible to associate with an identifiable person.
3. Notice and explanation
It’s one thing to disclose that an automated system, like a chatbot, is in use on a website. It’s another to provide a clear description of how the system works or how a user’s data might impact its function. When a customer journey is interrupted by a simplified explanation of a complex algorithm, a consumer is likely to agree to its use without a clear understanding of its implications. While this seems to provide marketing with some cover when challenged on the ethics of its algorithms, marketers must be cautious. When ethical risks become apparent, check-box consent won’t inoculate a brand from accusations of unfairness.
Building an ethical approach to AI
While other departments may have a clearer line of sight to the technical risks of AI, marketing is uniquely positioned to understand the customer and experience design perspectives and the risks to a brand's reputation.
Marketing is also more likely to draw on tools and datasets developed outside its organization that may evade internal oversight. This places the onus firmly on marketers to understand and address the ethical issues that AI is raising. In 2023, AI accountability will come to the forefront. Here’s how marketers can take steps toward a more ethical approach:
Look beyond privacy. AI ethics in marketing has to do with avoiding manipulation and bias rather than just securing consent. Beware of unintended consequences that arise when algorithms are trained to maximize commercial goals above all else, and include wellness as well as diversity, equity and inclusion goals in training and evaluation policies.
Ensure that marketing personnel, both internal and external, are fully versed in the relevant principles of data ethics by establishing training and certification programs. Screen all external platforms, processes and datasets for ethical vulnerabilities as part of the evaluation process, before deciding whether or not to adopt them.
Deploy disclosure policies that allow customers to drill deeply into explanatory text on demand, but don't build unnecessary roadblocks. Make opt-outs the default and label them clearly.
There are risks involved, but AI’s ability to improve marketing’s effectiveness and lower costs is making it virtually irresistible for marketing teams. The superior experiences it can offer will attract customers who will remain largely unaware of its potential for harm – until it’s publicly exposed.
In 2023, we expect to see a growing number of brands in the news for unethical use of AI, so now is the time for marketers to be proactive in taking accountability for its use.
Andrew Frank is vice president and distinguished analyst at Gartner.