
Artificial Intelligence Media

AI text writing technology too dangerous to release, creators claim


By Cameron Clarke, Editor

February 17, 2019

An artificially intelligent system trained to mimic natural human language has been deemed too dangerous to release by its creators.

OpenAI fears its text technology could be used maliciously

Researchers at OpenAI say they have created an AI writer which "generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks and performs rudimentary reading comprehension, machine translation, question answering and summarisation — all without task-specific training."

But they are withholding it from public use "due to our concerns about malicious applications of the technology".

They cited dangers such as the technology being used to generate misleading news articles and impersonate people online.

"Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights," OpenAI said in a blog post.

"This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas."

OpenAI was founded in 2015 with $1bn backing from Elon Musk and others. It is calling on governments to "consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems".
