AI text writing technology too dangerous to release, creators claim

OpenAI fears its text technology could be used maliciously

An artificially intelligent system trained to mimic natural human language has been deemed too dangerous to release by its creators.

Researchers at OpenAI say they have created an AI writer which "generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks and performs rudimentary reading comprehension, machine translation, question answering and summarisation — all without task-specific training."

But they are withholding it from public use "due to our concerns about malicious applications of the technology".

They cited dangers such as the technology being used to generate misleading news articles and impersonate others online.

"Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights," OpenAI said in a blog post.

"This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas."

OpenAI was founded in 2015 with $1bn backing from Elon Musk and others. It is calling on governments to "consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems".
