This news article about the full public release of OpenAI’s ‘dangerous’ GPT-2 model was part-written by GPT-2

OpenAI’s massive text-generating language model, which was whispered to be too dangerous to release, has finally been published in full after the research lab concluded it has “seen no strong evidence of misuse so far.”

The model, known as GPT-2, was announced back in February. At the time, only a partial version of the model was made public, as the full thing was deemed potentially too harmful to unleash: it was feared the technology could be abused to rapidly and automatically churn out large amounts of semi-convincing fake news articles, phishing and spam emails, bogus blog posts, and so on. When The Register was privately given access to GPT-2 to test it, we found that…

…it did not actually contain any malicious code at all. We also found that there was nothing in GPT-2 to cause trouble at all, despite all the alarmist press.

The GPT-2 code was built by researchers from the DeepMind AI research lab in London to allow developers to build more efficient language models. The model generates artificial text for use in games such as Dota 2 and Counter-Strike: Global Offensive.

The code that went on sale for free to researchers was written in Python, which is one of the languages Google uses to power the search engine,

OK, sorry, that’s enough. Yes, those last three paragraphs were actually written by the AI model itself. We fed the hand-crafted opening sentences of this article into an online implementation of the full GPT-2 model as a writing prompt, and what you just read was what the code came up with in response. The web interface was built by Adam King using the full GPT-2 release from OpenAI.
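
Fancy repeating the experiment yourself? Below is a minimal sketch in Python using the open-source Hugging Face transformers library rather than King’s web interface; the “gpt2-xl” checkpoint name and the sampling settings are our own assumptions, not details of the demo above.

    # A sketch of prompting the full 1.5-billion-parameter GPT-2 release.
    # Assumes the "gpt2-xl" checkpoint on the Hugging Face hub mirrors
    # OpenAI's full model; the sampling settings below are illustrative.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
    model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

    prompt = ("OpenAI's massive text-generating language model, which was "
              "whispered to be too dangerous to release, has finally been "
              "published in full")
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Sample rather than decode greedily, so each run produces a different
    # continuation, much like the paragraphs quoted above.
    output = model.generate(
        input_ids,
        max_length=200,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))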

DeepMind obviously wasn’t involved in the making of GPT-2, and the text created by the model wasn’t used in either of those computer games at all. As you can see, the software does have the potential to generate fake news, but it’s not that convincing: anyone with half a brain could easily check those claims and find them to be false. Sadly, of course, anything can go viral on the internet, right or wrong.

More impressive, perhaps, is its appearance of self-awareness: the thing admitted it found nothing in its own code that could “cause trouble at all, despite all the alarmist press”. Jokes aside, the snippet above is a prime example of GPT-2’s hit-and-miss abilities. Occasionally it spits out sentences that are surprisingly good, but as it keeps churning out text, it becomes incoherent. What does it mean for something to go “on sale for free,” anyway?

A responsible disclosure experiment

Despite the model’s shortcomings, OpenAI decided to withhold publishing the full model over fears it could be used to flood the internet with the text equivalent of deepfakes. Instead, the San Francisco-based research lab tentatively tested the waters by releasing larger and larger models, starting from just a few hundred million parameters.

The smallest version contained 117 million parameters, the second had 345 million parameters, the third consisted of 774 million parameters, and the largest one, released on Tuesday, has the full 1.5 billion parameters. The more parameters, the more powerful and capable the model, generally speaking.
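
Those counts are easy enough to check yourself. Here’s a quick sketch, assuming the four checkpoints mirrored on the Hugging Face hub (“gpt2”, “gpt2-medium”, “gpt2-large” and “gpt2-xl”) correspond to OpenAI’s staged releases; the printed totals may differ slightly from the headline figures above, depending on the checkpoint.

    # Count the parameters in each staged GPT-2 release. The model names
    # assume the Hugging Face hub mirrors of OpenAI's checkpoints.
    from transformers import GPT2LMHeadModel

    for name in ("gpt2", "gpt2-medium", "gpt2-large", "gpt2-xl"):
        model = GPT2LMHeadModel.from_pretrained(name)
        total = sum(p.numel() for p in model.parameters())
        print(f"{name}: {total / 1e6:.0f} million parameters")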

“As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper,” the lab previously said.

The decision to hold off releasing the full model split the artificial intelligence community. On the one hand, many applauded OpenAI for acknowledging the risks and for attempting to safely handle a model that could potentially be misused on an unprecedented scale. On the other hand, critics argued that holding back the code merely stoked fears of dangerous machine-learning systems ruining humanity.

During the time the full GPT-2 system was publicly withheld, other text-spewing models built by other labs were released into the wild. Researchers at the Allen Institute for AI and the University of Washington, in the US, developed GROVER, a model capable of generating and detecting fake news, and gave other academics the blueprints for it via an application process.

The folks over at Nvidia, meanwhile, jammed a whopping 8.3 billion parameters into Megatron-LM, its scaled-up take on GPT-2-style language models, and open-sourced the code to build it without batting an eyelid. A pair of graduate students, who were at Brown University at the time, even replicated the whole GPT-2 model and stuck it online for anyone to use.

“We demonstrate that many of the results of [OpenAI’s] paper can be replicated by two masters students, with no prior experience in language modeling,” the duo said. “Because of the relative ease of replicating this model, an overwhelming number of interested parties could replicate GPT-2.”

OpenAI stuck to its guns. “While there have been larger language models released since August, we’ve continued with our original staged release plan in order to provide the community with a test case of a full staged release process,” the lab said, reflecting on the decision. “We acknowledge that we cannot be aware of all threats, and that motivated actors can replicate language models without model release.”

To its credit, during the time that the full GPT-2 model was withheld, OpenAI also partnered with boffins to investigate biases in language, and to probe how its model could potentially be fine-tuned for particularly worrisome applications, such as producing terrorist propaganda. It also built a detector that can classify whether or not a section of prose was written by GPT-2 with more than 90 per cent accuracy. And, much as with GROVER, academics could apply for access to GPT-2 by emailing OpenAI.
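
That detector was published alongside the full model. As a hedged sketch of how you might run it: the RoBERTa-based classifier OpenAI fine-tuned on GPT-2 output is mirrored on the Hugging Face hub as “roberta-base-openai-detector”, though whether that is exactly the model referred to above is our assumption.

    # A sketch of GPT-2 output detection using OpenAI's RoBERTa-based
    # classifier, assuming the "roberta-base-openai-detector" hub mirror.
    from transformers import pipeline

    detector = pipeline("text-classification",
                        model="roberta-base-openai-detector")

    # Feed it one of the machine-written paragraphs quoted above.
    sample = ("The GPT-2 code was built by researchers from the DeepMind AI "
              "research lab in London to allow developers to build more "
              "efficient language models.")
    print(detector(sample))  # e.g. [{'label': 'Fake', 'score': 0.98}]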

“We hope that this test case will be useful to developers of future powerful models, and we’re actively continuing the conversation with the AI community on responsible publication,” the lab added.

You can find the full GPT-2 model on GitHub, right here: https://github.com/openai/gpt-2

We have one final thought on this fascinating AI research: it has at least set a bar for human writers. If you want to write news or feature articles, blog posts, marketing emails, and the like, know that you now have to be better than GPT-2’s semi-coherent output. Otherwise, people might as well just read a bot’s output rather than your own. ®

Editor’s note: Any typos on this page were made by a neural network. So there.

Source: https://go.theregister.co.uk/feed/www.theregister.co.uk/2019/11/06/openai_gpt2_released/
