
How content teams can protect themselves from ChatGPT

By Ethan Hays, senior vice-president

January 11, 2023 | 7 min read

Content marketers may be feeling the competitive heat from ChatGPT, the shiny object du jour. Here's what they need to do to fend off the machines, according to Plus Company's Ethan Hays.


“Any sufficiently advanced technology is indistinguishable from magic,” Arthur C Clarke once said.

ChatGPT feels like magic, and I am under its spell. But magic can be used for good or ill, depending on the application.

With the launch of ChatGPT, writers, content managers and content teams are in turmoil. What will this technology do to my product, my workflows, my team, my job?

There are no easy answers here, because the technology is so young, and the space is so frothy. What's certain is that we’ll be seeing much more AI-generated text in the near future.

If you're in the business of content production, you need a map of the landscape, and some good tools to help you along the way. We'll aim to give you both here.

First, we'll talk a bit about what ChatGPT is, and what that means for the words it generates. Next we'll show you some practical tools you can use to detect AI-generated text.

Background

Machine learning has inflection points where it leaps so far forward that it’s stunning. ChatGPT, recently launched by OpenAI, is the shining example.

On its face, ChatGPT is a chatbot. Give it any text prompt - a question, statement, challenge - and it will generate a response ranging from a sentence to a paragraph to computer code.

Under the hood, ChatGPT is a large language model (LLM), trained on a massive text dataset: a big bunch of words.

ChatGPT uses this massive text dataset to predict what the next sequence of words will be in a given context.

It's looking at statistical patterns and relationships between words to generate text that is coherent and realistic-sounding. ChatGPT can be surprising, funny, even poignant.
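To make "predict the next word from statistical patterns" concrete, here is a drastically simplified sketch: a bigram model that counts which word follows which in a tiny corpus, then predicts the most frequent follower. ChatGPT's internals (a transformer trained on billions of tokens) are vastly more sophisticated, but the core idea - next-word prediction from observed patterns - is the same.

```python
from collections import defaultdict

# Tiny toy corpus; a real LLM trains on billions of words.
corpus = (
    "the model predicts the next word . "
    "the model learns patterns . "
    "the next word is a prediction ."
).split()

# Count bigram frequencies: how often does word `b` follow word `a`?
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def predict_next(word):
    """Return the most frequent word seen after `word` in the corpus."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

print(predict_next("next"))  # → 'word'
```

Note what's missing: nothing in those counts knows whether "the next word" is factually true - the model only knows what words tend to appear together. That gap is exactly the accuracy problem discussed below.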

But it's just a statistical prediction engine. The words that it creates look and sound good together, but the response may be factually incorrect. From ChatGPT:

"Language models like ChatGPT are generally very good at generating coherent and realistic-sounding text, but they may not always produce output that is completely accurate or factually correct. It is important to carefully evaluate the output of any language model and to seek out additional sources of information to verify its accuracy."

So ChatGPT is... a bit of a bullshitter. The words sound great, but the factual quality is highly variable. Machine learning experts know this already. And industry-leading organizations like DeepMind have already published papers on the risk landscape associated with LLMs:

"The third risk area comprises risks associated with LLMs providing false or misleading information. This includes the risk of creating less well-informed users and of eroding trust in shared information.

Misinformation can cause harm in sensitive domains, such as bad legal or medical advice. Poor or false information may also lead users to perform unethical or illegal actions that they would otherwise not have performed.

Misinformation risks stem in part from the processes by which LMs learn to represent language: the underlying statistical methods are not well-positioned to distinguish between factually correct and incorrect information."

The problem is that these LLMs are now so good that their output is almost indistinguishable from human-written text. A 2021 research study reached a sobering conclusion: “We find that, without training, evaluators distinguished between LLM- and human-authored text at random chance level.”

As we've seen over the last few years, eroding trust in shared information is a big deal! And if you work in regulated industries, this can be a real problem with real consequences.

So what are we to do as writers and content professionals?


The tools

The good news is that there are several free online tools you can use to detect AI-generated language. The next time you come across content that seems fishy, pop it into one of these tools for a quick check.

GPTZero:

Productized version for educators: http://gptzero.me/

Free web app: https://etedward-gptzero-main-zqgfwb.streamlit.app/

Hugging Face GPT2: https://openai-openai-detector.hf.space/

Writer AI detector: https://writer.com/ai-content-detector/

These tools are not perfect. But they are an effective first step in a content verification pipeline.
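If you're curious what these detectors actually measure, one signal some of them (GPTZero, notably) look at is "burstiness" - how much sentence length varies. Human writing tends to mix short and long sentences; machine output is often more uniform. Below is a crude, dependency-free illustration of that one signal, not a reimplementation of any of these tools, and far too simple to rely on by itself.

```python
import re
import statistics

def sentence_lengths(text):
    # Split on sentence-ending punctuation; crude but good enough here.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    """Standard deviation of sentence length, in words.

    Higher variance is weak evidence of human authorship; near-zero
    variance can be one (weak) signal of machine-generated text.
    """
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

varied = ("No. That was never the plan. We spent three long years "
          "building this pipeline, and it nearly broke us.")
uniform = ("The model is good. The text is fine. The facts are bad. "
           "The tone is flat.")

print(burstiness(varied) > burstiness(uniform))  # → True
```

Real detectors combine many such signals (perplexity chief among them) and still make mistakes - which is why these checks belong at the start of a verification pipeline, not the end.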

Content teams that are producing content at very high volume have to deal with plagiarism all the time. Old-school plagiarism checkers like Copyscape have been part of distributed content production pipelines for decades.

As a content professional, you’ll need to plan for the flood of AI-generated text that will be coming your way. Free tools can help you ensure that you’re catching the robot text before your client does.
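For teams formalizing this, a verification pipeline can be as simple as running each draft through a list of checks and flagging anything that fails. The check functions below are hypothetical stubs - in practice each would call a plagiarism service such as Copyscape and one of the AI detectors listed above.

```python
def plagiarism_check(text):
    # Hypothetical stub: in production, call a plagiarism API here.
    # Return True if the text passes (no matches found).
    return True

def ai_text_check(text):
    # Hypothetical stub: in production, call an AI-text detector here.
    # Return True if the text passes (reads as human-written).
    return True

CHECKS = [
    ("plagiarism", plagiarism_check),
    ("ai-generated", ai_text_check),
]

def verify(text):
    """Run every check and collect the names of any that fail."""
    failures = [name for name, check in CHECKS if not check(text)]
    return {"passed": not failures, "failed_checks": failures}

result = verify("Draft copy from a contributor")
print(result["passed"])  # → True (both stubs pass everything)
```

The value of structuring it this way is that new checks - a second detector, a style linter - drop into the `CHECKS` list without touching the rest of the pipeline.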

Ethan Hays is senior vice-president at Plus Company.

