Attention marketers: Google’s $100bn Bard blunder underscores current dangers of AI
The chatbot generated an incorrect fact about the James Webb Space Telescope in its very first public demo. The incident has dramatically highlighted one of the most pertinent dangers for marketers using AI: it doesn’t always tell the truth.
Google publicly introduced Bard, its new AI-powered chatbot, in a blog post earlier this week. / Adobe Stock
Bard, Google’s new AI-powered chatbot, got its facts wrong, and now the company is paying a heavy price.
On Tuesday, Google tweeted a gif of what a typical user experience with Bard might look like. In it, the chatbot responds to the question: “What new discoveries from the James Webb Space Telescope can I tell my nine-year-old about?” The system replies within seconds with a neat, bulleted list of statements, all of them accurate save for the final “fact,” which claims that the “JWST took the very first pictures of a planet outside of our own solar system.”
Bard is an experimental conversational AI service, powered by LaMDA. Built using our large language models and drawing on information from the web, it’s a launchpad for curiosity and can help simplify complex topics → https://t.co/fSp531xKy3 pic.twitter.com/JecHXVmt8l
— Google (@Google) February 6, 2023
In fact, as astronomers were quick to point out on Twitter, such planets (called exoplanets) were first photographed long before the JWST was launched in 2021. According to Nasa, the first image of an exoplanet was captured in 2004 by the European Southern Observatory’s Very Large Telescope.
Reuters and New Scientist were the first to report on the astronomers’ corrective statements.
The blunder has been a sobering reminder for Google, and others, that test-and-learn mistakes can be expensive. The market value of Alphabet, Google’s parent company, dropped by around $100bn following Bard’s factual error.
It’s also the latest reminder that AI algorithms are far from perfect, and that they can often generate biased or inaccurate content veiled by overtones of certitude. Large language models like LaMDA and GPT-3 (the systems underlying Bard and ChatGPT, respectively) are trained on vast troves of text from the web and generate content according to statistical probability, not verified truth.
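To make that point concrete, here is a toy sketch, in Python, of the core idea: a language model picks its next word by sampling from a learned probability distribution, with no built-in check for factual accuracy. The vocabulary and probabilities below are entirely hypothetical and are not drawn from LaMDA, GPT-3 or any real model.

```python
import random

# Hypothetical next-token distribution for the prompt fragment
# "The first exoplanet was photographed in ...".
# A fluent-sounding wrong answer can carry more probability mass
# than the correct one; the model has no notion of truth.
next_token_probs = {
    "2004": 0.35,  # the correct year
    "2021": 0.40,  # plausible but wrong (the year JWST launched)
    "1995": 0.25,  # plausible but wrong (first exoplanet *detection*)
}

def sample_next_token(probs, rng=None):
    """Sample one token in proportion to its probability."""
    rng = rng or random.Random()
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))
```

Run repeatedly, the sketch returns the wrong year more often than the right one, which is the failure mode Bard’s demo exposed: fluency and confidence, decoupled from accuracy.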
News outlet CNET recently faced a backlash after it was discovered that an AI model it had deployed to write news stories was producing inaccurate and plagiarized content. Earlier this week, the video game streaming platform Twitch had to remove an AI-generated virtual Seinfeld episode after the system spewed a transphobic comment.
At least one brand appears to have taken note of the spate of recent AI-related PR crises: last week, Avocados From Mexico announced that it's scrapping its plan to integrate ChatGPT into its upcoming Super Bowl LVII campaign.
Google, meanwhile, is doubling down on guardrail mechanisms in the aftermath of the demo debacle. “This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program,” a Google spokesperson told The Drum. “We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”
The news of Alphabet’s precipitous stock price plunge arrives in the midst of an escalating struggle for AI supremacy between Silicon Valley giants Google and Microsoft. Google’s leadership reportedly issued a “code red” in December in the wake of the release of ChatGPT, a text-generating AI model launched the previous month by the Microsoft-backed tech startup OpenAI. Earlier this week, Microsoft unveiled a suite of new AI-powered features for Bing (its search engine) and Edge (its browsing platform), aimed largely at boosting the company’s competitive clout in an internet ecosystem dominated overwhelmingly by Google.
Google chief executive officer Sundar Pichai introduced Bard to a general audience in a company blog post published Monday. “We’ve been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard,” the post reads. “And today, we’re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks.”
Marketers would do well to pay attention to the trials and errors of major tech companies as they continue to deploy AI in novel ways, but not be deterred from experimenting, according to Mansoor Basha, chief technology officer at Stagwell Marketing Cloud. “As Big Tech fights for dominance in the generative AI space ... marketers cannot watch and wait for a winner to emerge,” he says. “Instead, we should be embracing experimentation – test with all, see what works, see what breaks – so that we are well-positioned with better actionable use cases once the dust settles.”
For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Drum’s weekly Emerging Tech Briefing here.