As Musk & others call for AI pause, the perils of rushing development are clear
Following the Wednesday publication of an open letter, signed by Elon Musk and several AI experts, calling for a pause on AI development over “risks to society,” Crenshaw Communications’ Chris Harihar argues for a more careful, ethics-focused approach to development.
The ad industry is, for good reason, abuzz with excitement over advances in artificial intelligence (AI) and machine learning (ML), and is eager to utilize them for ad targeting, measurement and streamlining. However, the rapid development of these technologies raises key ethical questions.
Yesterday, over 1,000 AI leaders, including Elon Musk, signed an open letter urging AI labs to pause training AI systems more powerful than GPT-4 for the next six months. The pause, the letter notes, is meant to address AI’s potential “risks to society.” Signatories have asked labs to develop shared safety protocols for advanced AI, overseen by independent experts. The letter underscores the growing problem of ‘impatient intelligence,’ a term I’ve coined in discussing AI and ML with adtech clients.
AI can transform businesses, offering insights, automation and decision-making capabilities previously relegated to the realm of science fiction. ChatGPT’s rapid rise has brought these benefits to the mainstream, making AI's potential increasingly tangible.
However, the hype has sparked an AI arms race, leading firms to rapidly develop and deploy algorithmic models without sufficient attention to quality, reliability or ethics. I call it impatient intelligence because it prioritizes short-term gains over long-term sustainability and responsible innovation, potentially harming the advertising industry and beyond.
We’re seeing impatient intelligence manifest in three key ways.
1. Poor quality experiences
One of the most significant issues arising from impatient intelligence is the creation of suboptimal experiences for end-users, whether they are consumers or businesses.
Microsoft’s ChatGPT integration into Bing created a chatbot that attracted over a million users in 48 hours. Google’s rushed response, Bard, stumbled in its first public demo, denting the company’s share price. Microsoft’s own chatbot, Bing Chat (also known as ‘Sydney’), meanwhile faced criticism for unhinged responses that included threats, manipulation and gaslighting. The chatbot’s evident unpreparedness for public use hurt the perception of AI and ML more broadly.
On the B2B side, Intercom’s director of machine learning discussed rolling out ChatGPT to optimize customer support experiences in a recent article for Computerworld, noting that “early tests showed that just putting in a fully GPT-powered bot naively in front of customers is a bad idea.” He added: “If you ask them for an answer to a question and they don’t have that answer, they will frequently make something up.”
For advertisers, these examples should serve as a warning. Many of these services simply aren't ready for prime time when integrated at scale. Relying on impatient intelligence for campaigns – whether it be targeting or creative – can negatively impact brand loyalty and have unintended consequences.
2. Unethical AI and ML that could be harmful
AI and ML models not thoroughly stress-tested risk being used unethically, whether intentionally or unintentionally, causing harm to underrepresented communities or influencing bad actors.
Misguided models may be racist, classist, sexist or otherwise discriminatory. ChatGPT has displayed biases, according to a handful of reports, as have robots driven by certain AI algorithms when selecting photos based on descriptors. Such models harm marginalized communities, worsen inequalities and perpetuate biases.
Advertising has seen these issues in the past. The Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative found Facebook’s ad-targeting algorithms biased against minority users in a 2019 evaluation. Google’s ad targeting, meanwhile, was found to exhibit racial bias in a 2013 Harvard study.
More than ever, advertisers must prioritize ethics during AI and ML development and deployment, as biases harm target audiences and the industry.
3. Disregard for ML engineers and developers
Impatient intelligence pressures AI and ML engineers with unrealistic timelines. Amid layoffs and reduced resources, big tech rushes innovations, forcing developers to do more with less, resulting in poor user experiences and ethically questionable AI.
It’s a well-documented issue: a ClearML report from this year revealed that ML developers felt burnt out even before the layoffs, and a talent shortage challenges ML’s large-scale implementation.
Last year’s Disney-Marvel visual effects saga shone a spotlight on similarly poor working conditions in entertainment – but AI and ML carry broader, more serious consequences. And an unsustainable approach to work only hinders responsible innovation.
Advertisers must recognize the risks of hastily developed AI and collaborate only with companies that prioritize the ethical development of these programs.
So, what can be done?
To address these issues, advertisers must work with tech partners that adopt a more patient and ethics-first approach. They should prioritize quality, accuracy and reliability in the development and deployment of AI and ML models.
This necessitates providing ML engineers and developers with the time, resources and support needed to create high-quality models. Firms should also conduct thorough stress tests and consider the potential ethical implications of their models before releasing them to the public – even in beta form.
Advertisers should also support public awareness about the use of AI and ML. Transparency is key to building trust in these technologies and ensuring their responsible development and deployment. This requires greater public engagement and participation in the development and deployment of AI and ML models.
Finally, regulators can support the push to ensure AI and ML models are developed and deployed responsibly. This means establishing ethical frameworks that protect users and prevent the development of harmful models. The AI Bill of Rights, released by the White House last year, is a good place to start.
Impatient intelligence poses a real threat. But advertisers can harness the potential of AI and ML, while mitigating risks associated with hasty development.
Chris Harihar is partner at Crenshaw Communications.