We can’t trust ChatGPT to become the next Google
AI technology may have given creatives and marketers a leg-up, but ‘search engine’ may be a tad ambitious, writes Neil Goddard of agency Tug.
Could the evolution of AI technology be detrimental to search engines? / Annie Spratt
Like many SEOs, I’ve been thinking a lot about ChatGPT. Why? Because of the immense potential it carries – and the care we need to take in managing it safely.
There’s no denying that ChatGPT, and other artificial intelligence (AI) chatbots like it, offer up some very exciting and wide-ranging capabilities. Need help writing an essay or solving a coding problem? ChatGPT has answers. But whether the outcome is accurate or not is a different story.
Quantity ≠ quality
Without a doubt, the technology already holds the potential to streamline many time-consuming tasks across a range of disciplines. At Tug, we’ve started using it to support internal tasks like brief creation and scaling keyword categorization. Meanwhile, some have gone as far as to say they’re using the tool over search engines.
While it is certainly one of the shiniest new additions to the tech industry’s toy box, whether we can trust it with the pursuit of knowledge is another matter entirely. AI chatbots like ChatGPT, Google Bard and Jasper are taught on a wealth of sources and offer an impressive array of skills because of this. However, when it comes to information, the quantity of sources doesn’t necessarily equal quality.
ChatGPT has been built on a database of content, not all of which is reliable – and not all of which may even have been publicly shared. What we do know it includes: books, articles and Wikipedia.
Subject to scrutiny
Other models use sources we know even less about. Google Bard, for example, ‘draws on information from the web to provide fresh, high-quality responses’. What we do know is that Bard is powered by Google’s LaMDA language model, which was pre-trained on a dataset comprising 50% dialogue data from public forums and 12.5% from Wikipedia.
Being a murky source of factual information doesn’t negate this tech’s enormous potential or its existing capabilities. However, the fact remains: people are already starting to use these chatbots to answer queries. And for all the technology’s potential, I don’t think it has reached that of a search engine quite yet.
Granted, the answers that current search engines provide aren’t always reliable. But here, users have more autonomy over the conclusions they draw from those answers. Type a question into Google Search and you get shown the familiar list of sites that best answer your query. Those sites are attached to authors, companies and institutions, enabling users to decide for themselves whether or not to trust the information.
While entering your query in ChatGPT generates an impressive, human-like answer, it does not provide the sources this answer is based on.
Suffice it to say: if someone told you to stop using Google Search today, you’d be quite stuck. Besides the sourcing issue, you can’t shop using an AI chatbot, or look up directions.
These two systems are undeniably different in their current form. While search engine answers should by no means be taken as gospel, the space they give users to draw their own conclusions from results makes for a more reliable response than that of ChatGPT.
If chatbots were ever to fill the gaps where they’re currently lacking, and integrate with search engine capabilities, the ChatGPT v Google Search debate would become much more interesting.
Search engine giants have recently claimed they’ll be integrating their traditional search features with tech similar to these AI chatbots. Google’s new AI feature, for example, will provide this newly familiar, AI-generated response to queries above its list of top-ranked sites.
Arguably, integrations like this could bring the best of both worlds and if executed seamlessly, could challenge the search engine as we know it today. But again: can we trust this evolution?
Long way to go
By their very nature, the human-like responses AI models provide lack referencing. Even with Google’s new AI feature, the answers don’t appear to accurately reference the sites they’re based on.
Not only do I think we can’t yet trust the very nature of these AI responses; integrating the technology into search engines could also further fuel distrust of search engine rankings.
Google continually refines its search algorithm to make the platform a more reliable source – but it has had to learn to adapt its rankings in this way. In its earlier days, deceitful tactics like keyword stuffing could get you ranking highly with content that didn’t actually answer search intent. That was, of course, before Google wised up to such practices.
If Google Search’s evolution is anything to go by, what’s to say introducing a new AI element into search engine functions won’t open the door to similar, devious SEO tactics as part of the integration’s growing pains?
As with many tools still in their infancy, we can’t trust AI-chat-based search in the same way as the tried, tested (and tweaked) search engine we’ve now known for decades.
Until there’s more transparency around what these AI-generated responses are built on, I believe ChatGPT and models like it still exemplify that we can’t trust everything we read on the internet.
Content by The Drum Network member:
Tug is a performance-driven, global digital marketing agency, optimized to grow ambitious brands through the smart combination of data, media, content and technology.