If you’ve ever boiled with inner turmoil at the failure of your Android device to recognize an “OK Google” command, you know AI speech recognition and natural language processing still have a long way to go. In many ways, it’s symptomatic of taking a disembodied, top-down approach to language that treats words as sounds rather than experiences. However, the folks at the OpenAI group (of Elon Musk fame) have made new strides in creating an AI that uses grounded, compositional language the way we do. This is both inspirational – what could be the dawn of a new era in communication – and more than a little alarming.

To better appreciate the departure this research represents, it’s important to understand the relevance of grounded, compositional language, as opposed to the canned responses offered up by Siri or Google Assistant. For decades, a faction of AI researchers has insisted that in order for AI to ever achieve something like common sense and an ability to communicate in a non-rigid fashion, it would need some form of embodiment – that is, an experiential locus from which to view and interpret its surroundings. The concept of grounded language follows from this principle and implies the ability to connect words and their meaning with one’s individual experiences.

This is an important distinction. Imagine a blind person who has never seen the color blue interpreting the word’s meaning in a sentence. They have no reference beyond the way other people have used the word “blue” before. This could be likened to how Siri or Google Assistant responds to a query – there is no experiential basis behind the response. A grounded use of language stems from an entity’s own experiences. This is precisely the kind of language adopted by the agents in the OpenAI research project.

Compositional language, on the other hand, denotes the ability to string together multiple words to form more complex meanings. Certain monkeys, for instance, have different warning calls they use to differentiate between a snake and a bird of prey. But their language cannot be termed compositional, because they will never string these together to form more complex meanings, such as “the bird is carrying the snake.” The language developed by the AI agents at OpenAI, though still simple by human standards, represents an advancement beyond almost anything seen in the animal kingdom.
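The distinction can be made concrete with a small sketch. The vocabularies below are invented for illustration – they are not drawn from the OpenAI experiment – but they show why fixed, atomic signals cannot express a meaning like “the bird is carrying the snake,” while a compositional vocabulary can:

```python
# Non-compositional: each whole message is a fixed, atomic signal,
# like a monkey's distinct warning calls. No new meanings can be formed.
fixed_calls = {
    "snake-alarm": "a snake is near",
    "eagle-alarm": "an eagle is near",
}

# Compositional: a small vocabulary whose words combine into
# meanings never stored anywhere as a single unit.
words = {
    "bird": "the bird",
    "snake": "the snake",
    "carry": "is carrying",
}

def compose(*tokens):
    """Build a complex meaning out of primitive word meanings."""
    return " ".join(words[t] for t in tokens)

print(compose("bird", "carry", "snake"))  # the bird is carrying the snake
```

The fixed-call system can only ever emit its two canned phrases; the compositional one generates meanings that grow combinatorially with its vocabulary.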

Even more amazing, the researchers never explicitly programmed this AI communication. Instead, it “evolved” as a solution to a reinforcement learning problem. While the jargon can get a bit technical, the OpenAI blog does a decent job of parsing it. The important thing to grok is that the language was never defined in advance, but rather hit upon as a solution to a general problem of learning to communicate. Reinforcement learning involves the use of a reward signal to continually guide the agent toward an optimal outcome. It can be likened to the difference between giving someone a map up a hill, and handing them an altimeter and saying don’t stop hiking until you reach maximum altitude. One approach lends itself to a single path, the other to a galaxy of alternatives.
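The altimeter analogy can be sketched in a few lines. This is a toy illustration only – the terrain function and step rule below are hypothetical, and the actual OpenAI work uses far more sophisticated methods – but it captures the core idea of an agent guided by nothing except a reward signal:

```python
import random

def altitude(x):
    """A toy hill whose peak sits at x = 3: the 'altimeter' reading."""
    return -(x - 3) ** 2

def hill_climb(start=0.0, step=0.1, iterations=1000):
    """Climb using only the reward signal -- the agent never sees a map."""
    x = start
    for _ in range(iterations):
        # Propose a random move and keep it only if the reward improves.
        candidate = x + random.choice([-step, step])
        if altitude(candidate) > altitude(x):
            x = candidate
    return x

print(round(hill_climb(), 1))  # converges near the peak at x = 3
```

Because the reward signal constrains only the destination, not the route, many different trajectories (and, in the OpenAI case, many different languages) can satisfy it.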

It’s not surprising, therefore, that AI agents developed some truly weird methods of communication – for instance, one in which the length of the spaces between communications came to represent different meanings, not unlike Morse code. At the moment, the AI language is completely non-human, with no English equivalent. And while there has been some talk of creating a translation tool to make the language readable in English, I think it’s worth simply marveling at the weirdness of these new communications. They may represent the closest thing to an alien language we have thus far encountered.
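To see how silence itself can carry meaning, consider a toy decoder. The gap-to-meaning mapping below is invented purely for demonstration – it is not the agents’ actual protocol – but it shows how the length of a pause between signals can function as a symbol, Morse-code style:

```python
# Hypothetical mapping from pause length (in time steps) to meaning.
GAP_MEANINGS = {1: "come here", 2: "go to landmark", 3: "look at"}

def decode_gaps(signal_times):
    """Read meanings from the pauses between successive signals."""
    meanings = []
    for earlier, later in zip(signal_times, signal_times[1:]):
        gap = later - earlier
        meanings.append(GAP_MEANINGS.get(gap, "unknown"))
    return meanings

# Signals at times 0, 1, 3, and 6 produce gaps of 1, 2, and 3 steps.
print(decode_gaps([0, 1, 3, 6]))
```

In such a scheme, nothing about the signals themselves matters; all of the information rides on the timing between them.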
