Trying to understand how new technologies will shape our lives is an exercise in managing hype. When technologists say their new invention has the potential to change the world, you’d hardly expect them to say anything else. But when they say they’re so concerned about its potential to change the world that they won’t release their invention, you sit up and pay attention.

This was the case when OpenAI, the non-profit founded in 2015 by Y Combinator's Sam Altman and Elon Musk (among others), announced its new neural network for natural language processing: GPT-2. In a blog post, along with some striking examples of its work, OpenAI announced that this neural network would not be released to the public, citing concerns about its potential for malicious use.

More Data, Better Data

In outline, GPT-2 follows a strategy that natural language processing neural networks have often employed: trained on a huge 40GB sample of text drawn from the internet, the neural network statistically associates words and patterns of words with each other. It can then attempt to predict the next word in a sequence based on the previous words, generating samples of new text. So far, so familiar: people have marveled at the ability of neural networks to generate text for some years. They've been trained to write novels and come up with recipes for our amusement.
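The core idea can be sketched with a toy bigram model: count which word follows which in the training text, then sample the next word in proportion to those counts. This is a deliberately crude illustration of the statistical principle; GPT-2 itself is a large Transformer conditioning on far richer context than a single preceding word.

```python
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count which word follows which: a crude stand-in for the
    statistical associations a language model learns."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts.get(word)
    if not followers:
        return None
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "cat"))  # "sat" or "slept"
```

Chaining `predict_next` calls word by word is exactly how such a model "generates samples of new text," and also why the output drifts: each step only knows local statistics, not meaning.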

But GPT-2 appears to be a step ahead of its predecessors. It's not entirely clear why, in part because the full model hasn't been released; but it appears to be simply a scaling-up of previous OpenAI efforts, built on the Transformer, a neural network design that has existed for a couple of years. That means more compute, more fine-tuning, and a larger training dataset.

The data is scraped from the internet, but with a twist: the researchers kept the quality high by scraping from outbound links from Reddit that got more than three upvotes—so if you’re a Reddit user, you helped GPT-2 find and clean its data.
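As a hypothetical sketch of that filtering heuristic (the field names below are invented for illustration, not Reddit's real API), the idea is simply to treat karma as a cheap human quality signal:

```python
def filter_links(posts, min_karma=3):
    """Keep only outbound links from posts that earned more than
    `min_karma` upvotes, using votes as a proxy for text quality."""
    return [p["url"] for p in posts if p["karma"] > min_karma]

posts = [
    {"url": "https://example.com/good-article", "karma": 57},
    {"url": "https://example.com/spam", "karma": 1},
]
print(filter_links(posts))  # only the high-karma link survives
```

The appeal of the trick is that it outsources data cleaning to millions of Reddit voters instead of hand-curating a corpus.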

The work of previous RNNs (recurrent neural networks) often felt as if vast samples of classic literature, death metal band names, or Shakespeare had been put through a blender, then hastily reassembled by someone who'd only glanced at the original.

This is why talking to AI chatbots can be so frustrating: they cannot retain context, because they have no innate understanding of anything they're talking about beyond these statistical associations between words.

GPT-2 operates on similar principles: it has no real understanding of what it’s talking about, or of any word or concept as anything more than a vector in a huge vector space, vastly distant from some and intimately close to others. But, for certain purposes, this might not matter.
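That "vastly distant from some and intimately close to others" intuition is usually measured with cosine similarity between word vectors. The tiny 3-dimensional embeddings below are invented numbers purely for illustration; real models learn vectors with hundreds of dimensions from data.

```python
import math

def cosine_similarity(u, v):
    """How closely two word vectors point in the same direction:
    1.0 = identical direction, near 0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy embeddings, invented for illustration only.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "mat":   [0.1, 0.0, 0.9],
}

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # close to 1
print(cosine_similarity(embeddings["king"], embeddings["mat"]))    # much smaller
```

To the model, "king" and "queen" are nothing but nearby points in this space; there is no concept of royalty behind the geometry, which is exactly the limitation the paragraph above describes.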

Unicorn-Chasing

When prompted to write about unicorns that could speak English, GPT-2 (admittedly, after ten attempts) came up with a page of text like this:

“Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

“While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

“However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution or at least a change in social organization,” said the scientist.”

What’s really notable about this sample is its overarching structure: it reads almost exactly as a normal scientific article or write-up of a press release would. The model doesn’t contradict itself or lose its flow in the middle of a sentence. Its references to location are consistent, as are the particular “topics” of discussion in each paragraph. GPT-2 is not explicitly programmed to remember (or invent) Dr. Pérez’s name, for example, yet it does.

The unicorn sample is a particularly striking example, but the model’s capabilities also allowed it to produce a fairly convincing article about itself. With no real understanding of the underlying concepts or facts of the matter, the piece has the ring of tech journalism but is entirely untrue (thankfully, otherwise I’d be out of a job already).

The OpenAI researchers note that, as with all neural networks, performance is determined by the computational resources used in training and the size of the training sample. OpenAI’s blog post explains: “When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50 percent of the time.”
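One reason only some samples come out "reasonable" is that generation is stochastic: each run draws words from the model's predicted probability distribution rather than always taking the single most likely word, which is also why the unicorn sample took ten attempts. OpenAI's released samples used top-k sampling; the closely related temperature trick sketched below is a generic illustration, not a description of their exact code.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution (softmax)
    and draw one token index. Lower temperature sharpens the distribution
    toward the top-scoring word; higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

# Hypothetical scores over a tiny vocabulary, for illustration.
vocab = ["unicorn", "scientist", "the", "fire"]
logits = [2.0, 1.0, 0.5, -1.0]
print(vocab[sample_with_temperature(logits, temperature=0.7)])
```

Run it repeatedly and the output varies; crank the temperature down toward zero and it almost always picks "unicorn," the top-scoring word.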

Rewriting the World

However, when fine-tuned on specifically selected datasets for narrower applications, the AI becomes more convincing. One of the niche applications the OpenAI researchers trained the model to perform was writing Amazon reviews. This kind of convincing generation of online content was what led OpenAI to decide not to release the full model for general use.

This decision has been controversial, with some cynics suggesting that it’s a publicity stunt designed to get more articles written to overhype OpenAI’s progress. But there’s no need for an algorithm to be particularly intelligent to shape the world—as long as it’s capable of fooling people.

Deepfake videos, especially in these polarized times, could be disruptive enough, but the complexity of a video makes it easier to spot the “artifacts,” the fingerprints left by the algorithms that generate them.

Not so with text. If GPT-2 can generate endless, coherent, and convincing fake news or propaganda bots online, it will do more than put some Macedonian teens out of a job. Clearly, there is space for remarkable improvements: could AI write articles, novels, or poetry that some people prefer to read?

The long-term impacts of such a system on society are difficult to comprehend. It is long overdue for the machine learning field to abandon its ‘move fast and break things’ approach to releasing algorithms with potentially damaging social impacts. An ethical debate about the software we release is just as important as ethical debates about new advances in biotechnology or weapons manufacture.

GPT-2 hasn’t yet eliminated some of the perennial bugbears associated with earlier RNNs. Occasionally it will repeat words, switch topics unnaturally, or say things that don’t make sense due to poor word modeling: “The fire is happening under water,” for example.

Unreasonable Reason

Yet one of the most exciting aspects of the model is its apparent ability to develop what you might call “emergent skills” that weren’t specifically programmed. The algorithm was never explicitly trained to translate between languages or summarize longer articles, but it can have a decent stab at both tasks simply thanks to the sheer size of its training dataset.

That dataset contained plenty of examples of long pieces of text followed by “TL;DR.” If you prompt GPT-2 with the phrase “TL;DR”, it will attempt to summarize the preceding text. It was not designed for this task, and so it’s a pretty terrible summarizer, falling well short of the best dedicated summarization algorithms.
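Mechanically, this zero-shot summarization is nothing more than ordinary text continuation with a cue appended: hand the model the article plus "TL;DR:" and let it predict what comes next. A minimal sketch of the prompt construction (whitespace splitting stands in for GPT-2's real byte-pair tokenizer; 1024 is GPT-2's actual context length):

```python
def tldr_prompt(article: str, max_context: int = 1024) -> str:
    """Frame summarization as plain continuation: append the 'TL;DR:'
    cue and keep only the tail of the article so the cue still fits
    inside the model's fixed context window."""
    tokens = article.split()  # crude stand-in for byte-pair encoding
    tokens = tokens[-(max_context - 8):]  # leave room for the cue
    return " ".join(tokens) + "\nTL;DR:"

print(tldr_prompt("A long article about English-speaking unicorns ..."))
```

No summarization objective appears anywhere; the "task" exists only because the training data happened to contain that pattern.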

Yet the fact that it will even attempt this task with no specific training shows just how much behavior, structure, and logic these neural networks can extract from their training datasets. As a byproduct of the endless quest to determine which word comes next, it appears to develop a vague notion of what it is supposed to do in this TL;DR situation. This is unexpected, and exciting.

You can download and play with a toy version of GPT-2 from GitHub.

Image Credit: Photo Kozyr / Shutterstock.com