AI writer created by OpenAI researchers too powerful for release
News | February 22, 2019 | Tom Gainey
An artificially intelligent writer built by a team of researchers could be used for ‘malicious’ purposes, it is understood.
OpenAI is a research institute based in San Francisco, backed by the likes of Elon Musk and Peter Thiel. You may have heard of it; after all, OpenAI has been the subject of worldwide media reports in the last few days.
This comes after it shared new research on a system capable of producing natural language using machine learning. However, concern has been expressed that the tool could be used to produce convincing fake news on a grand scale.
The BBC reported that, when fully operational, the system produces results that are “impressively realistic in tone”. The Guardian, meanwhile, reported that the AI fake-text generator “may be too dangerous to release”.
To prevent the research falling into the wrong hands, OpenAI has not released it yet. In choosing not to, the institute is allowing more time to study and assess any potential ramifications.
We've trained an unsupervised language model that can generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training: https://t.co/sY30aQM7hU pic.twitter.com/360bGgoea3
— OpenAI (@OpenAI) February 14, 2019
What is artificial intelligence (AI)?
Essentially, AI is the ability of a digital computer to perform tasks often associated with intelligent beings. AI platforms typically demonstrate the capacity to discover meaning, learn from past experience, reason and generalise.
Taking it back to basics, you may have noticed it when playing a game of online chess. The computer can give you a decent game by searching through the moves available and scoring the positions they lead to, as annoying as this can be!
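For the curious, here is a toy sketch in Python of that game-playing search idea. It is purely illustrative and is not the code of any real chess engine: it applies minimax search (in its compact negamax form) to the far simpler game of Nim, where players alternately take one to three stones and whoever takes the last stone wins. The game, the move rule and the function names are all assumptions made for the example.

```python
# Toy illustration of how a game-playing program picks moves:
# exhaustively search the game tree and assume both sides play best.
# Chess engines use the same idea, with depth limits and a scoring
# function, since chess is far too big to search exhaustively.

from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones: int) -> int:
    """Return +1 if the player to move can force a win, else -1."""
    if stones == 0:
        return -1  # the previous player took the last stone and won
    # Try every legal move; a win for the opponent is a loss for us,
    # so negate the opponent's outcome and keep the best move for us.
    return max(-best_outcome(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int) -> int:
    """Choose the move that leaves the opponent worst off."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: -best_outcome(stones - take))

for n in range(1, 8):
    outcome = "win" if best_outcome(n) == 1 else "lose"
    print(f"{n} stones: take {best_move(n)} ({outcome} with best play)")
```

Running it shows the known pattern for this game: any pile that is a multiple of four is a loss for the player to move, whatever they do.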
Meanwhile, AI is also used in a wealth of other areas, ranging from medical diagnosis and search engines to voice and handwriting recognition.
What is OpenAI?
OpenAI is a not-for-profit research company. It’s supported by Reid Hoffman, Sam Altman and Elon Musk, among others.
Its new AI model, GPT-2, is deemed so good that OpenAI is breaking from its usual practice of releasing the full research to the public.
That’s because of the potential risk of malicious use. So, in delaying the release of the information, OpenAI is allowing more time to discuss the ramifications of the latest technological breakthrough.
What has OpenAI said about GPT-2?
On Valentine’s Day 2019, OpenAI shared an explanatory blog post on its site. Titled ‘Better Language Models and Their Implications’, the post also included a link to the full paper.
It read: “We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modelling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarisation – all without task-specific training.”
“Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model.
“As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”
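The key phrase there is “trained simply to predict the next word”. The Python sketch below is a minimal, hypothetical illustration of that objective: instead of GPT-2’s large neural network trained on 40GB of text, it uses raw word-pair counts over a tiny made-up corpus. The scale and method are nothing like OpenAI’s, but the loop of “learn what tends to come next, then generate one word at a time” is the same basic idea.

```python
# Toy sketch of next-word prediction: count which words follow which,
# then generate text by repeatedly sampling a likely next word.
# GPT-2 learns these patterns with a neural network at vastly
# greater scale; this is only the statistical idea in miniature.

import random
from collections import Counter, defaultdict

corpus = (
    "the model reads text and learns to predict the next word . "
    "the model then generates text one word at a time ."
).split()

# For each word, count how often each other word follows it.
following = defaultdict(Counter)  # word -> Counter of successors
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate text one predicted word at a time.
word, output = "the", ["the"]
for _ in range(10):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```

Even this crude version produces locally plausible word sequences; the striking thing about GPT-2 is how coherent the output becomes when the same objective is pursued with a far more powerful model and far more text.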
The full blog post, which goes on to discuss GPT-2’s inception, is available on OpenAI’s website.