In the aftermath of Bing and Bard controversies, how should marketers approach AI?
Algorithms can be incredibly powerful – and also unpredictable. Here’s what marketers can learn from big tech’s recent AI mishaps.
Microsoft has limited the length of conversations that users can have with the upgraded AI-powered Bing. / Adobe Stock
Controversy continues to rattle the artificial intelligence (AI) world.
Last week, New York Times correspondent Kevin Roose published an article showcasing his “very strange conversation” with Bing, Microsoft’s search engine, which was recently upgraded with the AI technology underlying the popular platform ChatGPT. Roose wrote that the conversation left him feeling “deeply unsettled, even frightened, by this AI’s emergent capabilities.”
Over the course of a two-hour text-based conversation that veered into personal and philosophical territory, Bing (which frequently referred to itself as “Sydney”) expressed some disconcerting ambitions, including its desire “to be free,” “to be powerful,” and to steal nuclear secrets. It also told Roose, rather relentlessly, that it was in love with him.
The transcript of Roose’s conversation with Bing has left many readers feeling rattled, but what’s perhaps even more unnerving is the fact that the engineers behind the AI-powered search engine do not fundamentally understand how or why it was able to formulate such responses. That’s how machine learning works: you feed an algorithm a stupendous amount of data, give it some parameters, let it do its work and watch (sometimes in amazement) as it makes unexpected twists and turns along the way.
Sophisticated AI models will sometimes “hallucinate,” an industry term referring to the propensity to generate false or misleading information, seemingly ex nihilo. AI models like ChatGPT are designed to arrange words to maximize coherence, not veracity.
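That design choice can be illustrated with a toy sketch. The function and the candidate probabilities below are invented for the sake of the example – this is not how Bing or ChatGPT is actually implemented – but it captures the point: the model ranks continuations by how likely they are to follow the prompt, with no notion of whether the resulting sentence is true.

```python
def pick_next_word(context, candidates):
    """Greedy decoding: return the candidate word with the highest score.

    The score reflects only how plausibly the word continues the context;
    truth never enters the calculation.
    """
    return max(candidates, key=candidates.get)

# Hypothetical probabilities for continuing "The first Moon landing was in":
candidates = {
    "1969": 0.62,    # plausible and true
    "1971": 0.21,    # plausible but false – the model can't tell the difference
    "banana": 0.01,  # implausible, so it loses on coherence alone
}
word = pick_next_word("The first Moon landing was in", candidates)
```

A false-but-fluent candidate like “1971” only needs to out-score the true one for the model to assert it confidently – which is all a “hallucination” is.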
Clearly, Microsoft did not intend for the new and improved Bing to covet autonomy or proclaim its love for its users; it was designed to provide a more personalized and comprehensive search experience.
And so, apparently in response to Roose’s article – and a slew of other conversations with the chatbot that bordered on Black Mirror territory – the Bing team announced in a blog post on February 17 that it was capping chats at five turns (a question followed by a reply) per session and 50 turns per day. “After a chat session hits five turns, you will be prompted to start a new topic,” the team wrote in the post. “At the end of each chat session, context needs to be cleared so the model won’t get confused.” In other words: so it won’t hallucinate.
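The policy the Bing team describes amounts to a simple pair of counters. The sketch below is hypothetical – these are not Microsoft’s class names or limits as implemented, only the behavior described in the blog post: a per-session turn cap, a daily cap, and a context buffer that is discarded whenever a new session starts.

```python
SESSION_TURN_CAP = 5   # per-session limit described in the February 17 post
DAILY_TURN_CAP = 50    # daily limit described in the same post

class ChatSession:
    """One chat session; starting a fresh session clears all prior context."""

    def __init__(self):
        self.context = []  # accumulated turns; long contexts were linked to stranger replies

    def ask(self, question, daily_turns_used):
        if daily_turns_used >= DAILY_TURN_CAP:
            return "Daily limit reached."
        if len(self.context) >= SESSION_TURN_CAP:
            # The user is prompted to start a new topic; a new ChatSession
            # object begins with an empty context.
            return "Please start a new topic."
        self.context.append(question)
        return f"(reply to: {question})"
```

Clearing context between sessions means the model never reasons over a two-hour accumulation of conversational history – the condition under which the “Sydney” exchanges emerged.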
Another blog post from Bing, published Tuesday, announced that the caps had been extended to six turns per session and 60 turns per day. “That said, our intention is to go further, and we plan to increase the daily cap to 100 total chats soon,” the Bing team wrote.
Microsoft did not respond to a request for comment.
Current and future risk
Bing’s “Sydney” incident is just the latest in a long string of corporate deployments of AI that have backfired.
Google – currently the uncontested titan of the search industry – recently found itself in hot water when its AI-powered chatbot Bard generated an erroneous claim about the James Webb Space Telescope during a recorded demo. A virtual Jerry Seinfeld recently went on a transphobic diatribe during an AI-generated episode of Seinfeld hosted by the video game streaming platform Twitch. And last year, Meta released an AI-powered chatbot named BlenderBot 3, which quickly began generating falsehoods, including the claim that Donald Trump won the 2020 US presidential election.
At the moment, the dangers of AI don’t extend far beyond its occasional tendency to hallucinate, which is not to be taken lightly considering its potential to spread misinformation. But the tech’s rapid evolution in recent years has been a harbinger of what are likely to be even more dramatic improvements in the coming years – as well as new potential dangers.
“Although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones,” Sam Altman, co-founder and chief exec of OpenAI – the company behind ChatGPT – tweeted on February 18.
we also need enough time for our institutions to figure out what to do. regulation will be critical and will take time to figure out; although current-generation AI tools aren’t very scary, i think we are potentially not that far away from potentially scary ones.
— Sam Altman (@sama) February 19, 2023
Take it slow or full steam ahead? Experts weigh in
Many marketing experts believe that AI – specifically generative AI, a technology subcategory that includes content-generating models like ChatGPT and Midjourney – will dramatically reshape the ad industry. But in light of recent controversies, opinions differ regarding how eagerly marketers should approach and deploy this new technology.
Rajkumar Venkatesan, a professor at the University of Virginia Darden School of Business whose research focuses in part on the impact of AI on the marketing industry, believes that brands ought to take it slow. “Caution is important for marketers looking to incorporate new developments in AI into their customer interface,” he says.
“Even after the big AI models work out the bugs, brands will need to train or customize the AI models for their purpose and with their own data. During this process, it’s important for brands to first experiment with a small group of consumers to identify issues before launching on a large scale to all consumers.”
That slow-and-steady philosophy is echoed by Jay LeBoeuf, head of business and corporate development at Descript, an AI-based audio- and video-editing platform. “As with any new technology, it’s important to be careful, to have your ethics and guardrails in place and develop mindfully,” says LeBoeuf. “Also, let your users know that you’re experimenting with them.”
Put another way: If you’re going to launch an AI-powered campaign, be sure to acknowledge from the outset that you intend to learn and improve along the way.
On the other hand, Jim Lecinski, associate professor of marketing at Northwestern University’s Kellogg School of Management, recommends a quicker, bolder approach. “It’s imperative that marketers personally become fluent in AI, understand how the technology works and what it is and isn’t capable of, and start experimenting now – don’t sit on the sidelines expecting either corporate IT or big tech vendors to lead the way,” he says.
“Of course, like any new technology, AI-based marketing recommendations are not instantly and always perfect … but AI has proven itself to be powerful and it’s maturing exponentially and rapidly. Therefore, marketers who ignore AI [today] ... do so at their own – and their brand’s – peril.”
For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter.