AI text writing technology too dangerous to release, creators claim
An artificially intelligent system trained to mimic natural human language has been deemed too dangerous to release by its creators.
Researchers at OpenAI say they have created an AI writer which "generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks and performs rudimentary reading comprehension, machine translation, question answering and summarisation — all without task-specific training."
But they are withholding it from public use "due to our concerns about malicious applications of the technology".
OpenAI fears its text technology could be used maliciously
They cited dangers such as the technology being used to generate misleading news articles and to impersonate others online.
"Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights," Open AI said in a blog post.
"This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas."
OpenAI was founded in 2015 with $1bn backing from Elon Musk and others. It is calling on governments to "consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems".