
Reflecting on one year of ChatGPT: how has the world been changed?

By Webb Wright, NY Reporter

November 30, 2023 | 15 min read

Exactly one year ago, ChatGPT made its public debut. It has since sent a shockwave across popular culture and helped to spark a political dialogue about the potential risks and benefits of AI.

Released to the public on November 30, 2022, ChatGPT garnered its first million users in just five days. / Adobe Stock

There are certain chapters of history that come to be regarded as phase transitions for humanity, moments in which cultural evolution and the course of our individual, day-to-day lives shift in entirely new directions – for better or for worse. We tend to call these historical shifts “revolutions”; think of the Agricultural Revolution, the Industrial Revolution, the Digital Revolution.

We are currently, as you may have heard, living through what many have begun to call the Artificial Intelligence (AI) Revolution.

Though rudimentary forms of AI have been around for decades (you interact with it every time you search for a route in Google Maps or receive a recommendation in your Instagram feed, for example), the past couple of years have witnessed an enormous leap in the technology’s capabilities. In what feels like the blink of an eye, advanced AI has emerged from the realm of science fiction and into reality; the notion of a superintelligent algorithm capable of exterminating humanity is no longer just the stuff of novels and films – it’s a subject that’s being seriously discussed by tech moguls and government officials.

The release of ChatGPT one year ago to the day could well be remembered as the moment when the wave of AI fervor now washing across the world swelled from a slow trickle into a flood.

‘We’re suddenly in a moment where it’s time to rethink everything’

Built upon OpenAI’s proprietary large language models (LLMs) and leveraging natural language processing, ChatGPT shocked many users in the days and weeks following its launch with its capacity to provide complex and coherent responses to text-based prompts. Trained on a vast corpus of data gleaned from the internet, the platform can mimic the style of virtually any well-known human author, provide detailed explanations for just about every subject that one could name – from quantum entanglement to the origins of the Peloponnesian War – and perform a litany of other linguistic tricks. It can also write code, making it a valuable tool for software developers.

The public debut of ChatGPT showed laypeople just how powerful AI – and specifically LLMs – could really be. The platform’s abilities, along with those of other LLM-powered chatbots such as Google’s Bard, are so sophisticated that they’ve sparked a serious ethical debate surrounding the hypothetical sentience of these systems; a Google engineer was famously fired last year for publicly claiming that LaMDA, the LLM behind Bard, was conscious and capable of experiencing emotion. (Ask ChatGPT if it’s conscious and it will confidently declare that it isn’t: “My responses are generated algorithmically and do not involve subjective experience or awareness.”)

ChatGPT has transformed the very grammar of our culture. ‘Generative AI’ has become a household phrase, spawning even buzzier abbreviations such as ‘gen AI’ and ‘GAI.’ The Cambridge Dictionary 2023 Word of the Year was ‘hallucinate,’ a nod to the tendency of LLMs to occasionally assert false information. Republican presidential candidate Chris Christie recently zinged his competitor Vivek Ramaswamy during a debate by comparing him to ChatGPT, and the tool was the subject (and the co-writer) of a recent episode of South Park.

“OpenAI, primarily through ChatGPT and their GPT series of LLMs, have really transformed the world,” says Henry Ajder, an expert in generative AI and the technology’s impacts on the global information ecosystem. “The large language model landscape looks the way it does, in my view, because of OpenAI and because of ChatGPT … [the platform] has made a massive mark on the general societal consciousness that I think is pretty incredible.”

Businesses have scrambled to capitalize on the generative AI gold rush set in motion largely by the public attention that ChatGPT captured. Companies as varied as Coca-Cola, Volkswagen and Under Armour have been quick to leverage the technology following the platform’s release.

While many across the advertising industry have begun to embrace ChatGPT and other AI tools for their capacity to quickly generate content, handle monotonous tasks and personalize marketing efforts, others have begun to voice concerns about the tech’s potential to push some human marketers out of their roles.

“This is the cascading effect of ChatGPT’s introduction to the world,” says Mike Creighton, executive director of experience innovation at marketing agency Instrument. “We’re suddenly in a moment where it’s time to rethink everything.”

Like Creighton, many other marketing experts insist that the public debut of ChatGPT this time last year ushered in a profoundly transformative moment in history – both for their industry and for society as a whole.

“I think 2023 will be one of those moments we’ll look back on and say, ‘That was the year it all changed,’” says Matthew Candy, global managing partner of generative AI at IBM. “Generative AI has undeniably captivated the world’s collective attention and sparked fires of imagination … More consumers can [now] experience AI firsthand, giving them a richer understanding of what these technologies can do when paired with a simple and engaging user experience, similar to the transformative shift that happened during the rise of the internet and mobile technologies.”

A year of accelerated change

Looking back to this time last year, and given the enormous amount of sensationalist press coverage about AI that we’ve seen since, the sudden arrival of ChatGPT feels a bit like the mysterious appearance of the monolith at the beginning of 2001: A Space Odyssey. There we were, tinkering with our quaint AI-powered tools – Siri, Alexa and so forth – when suddenly we were presented with something categorically different, something that felt almost like an alien technology that nobody, not even the people who had built it, fully understood.

But the fact is that ChatGPT was launched not with a bang but with a whisper. There was little fanfare within OpenAI to mark the occasion; many employees who hadn’t worked directly on the product’s development didn’t even know about the launch until after it had occurred, according to reporting from The Atlantic. Few could have predicted that it would become the fastest-growing app in history, attracting its first million users within just five days. (That record would be shattered the following summer with the launch of Threads, Meta’s text-based social app, which reportedly reached 100 million users within five days of its launch.)

The meteoric rise of ChatGPT was followed by a growing public awareness of its shortcomings. As previously mentioned, LLMs have a tendency to hallucinate, which can lead to the spread of misinformation and embarrassing (and sometimes costly) business blunders. But with each new iteration of GPT, the system has become more capable and less prone to hallucination. (GPT-5 could be released as soon as next year, though the company has not announced a formal launch date.)

Suddenly flush with international fame, OpenAI quickly went on to release a flurry of additional products and features. The company launched APIs for ChatGPT in March – enabling businesses to embed the model in their own customer-facing products – followed later that month by the release of GPT-4, the successor to GPT-3.5. In September, ChatGPT was given a voice, along with the ability to analyze images. At its first developer conference (DevDay) earlier this month in San Francisco, OpenAI unveiled customizable versions of ChatGPT, called GPTs.
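For readers curious what such an integration looks like under the hood, here is a minimal sketch of a call to the ChatGPT API using OpenAI’s Python SDK (the v1.x interface current at the time of writing). The customer-support framing, prompt and model choice are illustrative assumptions, not a reproduction of any particular company’s implementation.

    # Minimal sketch of a ChatGPT API call via the openai Python package (v1.x).
    # The assistant role and prompts below are hypothetical examples.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the model first exposed via the API in March 2023
        messages=[
            {"role": "system", "content": "You are a concise customer-support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
        ],
    )

    print(response.choices[0].message.content)

A handful of lines like these are, in essence, what allowed companies to bolt conversational AI onto existing products within weeks of the API’s release.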

A year of huge success at OpenAI culminated earlier this month in the abrupt firing of co-founder and CEO Sam Altman, followed five days later by his reinstatement and the resignation of most of the board members who had originally pushed to oust him. The episode has raised some thorny questions about the company’s mission and about the future of the AI industry as a whole.

OpenAI has a highly unusual governance structure – one, it could be argued, that made its recent turmoil inevitable. It was founded as a nonprofit in 2015 as a counterbalance and a corrective to the profit-driven incentives that were beginning to take hold throughout the then-burgeoning AI industry. The founding charter of OpenAI is to develop artificial general intelligence, or AGI – automated systems that can equal or outmatch human performance in most tasks – safely, in a manner that directly benefits humanity, not just corporations and their shareholders.

“Artificial general intelligence has the potential to benefit nearly every aspect of our lives – so it must be developed and deployed responsibly,” the company says on its website.

In 2019, the company launched a for-profit arm with the goal of raising money from investors and launching commercial products. The move deepened a rift within the company: co-founder Elon Musk, who had already jumped ship, became a vocal critic of OpenAI’s transition to a “capped-profit” company backed by a huge amount of funding from Microsoft – exactly the kind of tech behemoth that OpenAI was originally intended to counterbalance.

OpenAI is governed by a board whose charter, at least in theory, is to ensure that the company does not veer too far off course from its original mission. The board’s authority was exercised dramatically – if not with perfect political grace – on November 17, when it fired Altman without much explanation beyond stating vaguely in a public statement that he hadn’t been “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

The sudden firing led to outrage across Silicon Valley and a near-mutiny within OpenAI; the majority of the company’s employees signed an open letter threatening to quit and join Microsoft – where Altman had been offered a position leading a new AI research team – unless he was reinstated as CEO and the board members who had pushed him out resigned. Eventually, most of those board members bowed to the overwhelming public pressure and left their posts, replaced by individuals expected to be less hostile to rapid productization and growth within the company.

The recent controversy within OpenAI has accentuated a growing divide in the AI industry, with zealous optimists pushing for rapid growth on one side and cautious safety advocates on the other. Some have interpreted Altman’s reinstatement and the restructuring of the company’s board as evidence that the forces of the market were, in the end, too powerful for OpenAI’s founding mission to oppose: “AI belongs to the capitalists now,” as one recent New York Times headline bluntly put it.

Some believe the episode will ultimately benefit society: “It’s good that this is happening, because it forces a discussion about the competitive environment of the AI landscape versus the responsibility behind it,” says PJ Pereira, chief creative officer at marketing agency Pereira O’Dell.

Despite having authored a novel in which a villainous rogue AI escapes human control, Pereira considers himself an AI optimist. But he says that the past year has shown the world just how rapidly technology will evolve – and how quickly society will be forced to adapt – in the new technological age taking shape around AI. “The things that we [were worried about] one year ago are absolutely normal right now,” he says, invoking the example of the recent writers’ strike, which focused in part on the growing presence of generative AI in the film industry, “which raises the possibility that things which we [now] believe to be impossible may be just one year away.”

Then again, it’s possible that AI will fail to live up to the hype, and people will look back on 2023, with all of its sensationalist headlines about the utopian and dystopian potential of AI, and shake their heads in amazement that humanity could have been so wrong. But at least at the time of this writing, it seems very probable that the pace of development within the AI industry will only continue to quicken, and that AGI will ultimately be achieved. (Many AI experts agree that AGI is inevitable, though its estimated time of arrival is hotly contested.)

Given the speed at which LLMs have been evolving in recent years, driven in large part by huge investments from major tech companies like Google and Microsoft, it’s difficult to predict how much more advanced they’ll be within just a few years. It seems likely, however, that someone using the 2030 model of ChatGPT will look back on the current version much as we now look back on Windows 95.

For now, all we can say is that we're living in a very different world than the one we were in one year ago – and this is just the beginning.

For more on the latest happenings in AI, web3 and other cutting-edge technologies, sign up for The Emerging Tech Briefing newsletter.
