Artificial Intelligence Will Speak Its Own Language Soon

Grounded language is a new step towards Artificial Intelligence, revealed by OpenAI

The article is about a system that invents a language tied to its perception of the world. In short, the post explores the possibilities that research into artificial languages might open. At first, such a language will resemble the signaling systems typical of animals; later languages will evolve into something more complex.

There is no such thing as an evolution of languages. There is an evolution of the ability to use languages. This ability appeared about 75,000 years ago, and it was extremely simple. What we call a language today is how that inner capacity is transformed into speech. As Chomsky noted, spoken language is secondary to the essential processes of thinking. There are about 6,000 different languages in the world. What we really want is to understand the underlying principle that gives us the ability to acquire any of these 6,000 languages, and to create new ones.

Language is not necessarily spoken sound; it is more an inner process, closer to thinking.

In some sense, language is similar to vision.

We have written language, and we have photos. The ability to look at an object from several perspectives is analogous to asking questions about details or hidden facts. An inner dialogue is analogous to imagining scenes. The most interesting part is that the two abilities are closest at the lowest level: they are built from the same material, on the same principles. Discovering a system that can handle both vision and language is the foundation of intelligence.

The ultimate goal is a system that perceives reality visually, forms abstractions, and then uses language to manipulate those abstractions. The goal is to connect the two the way the human mind does. I wrote more about this translation process here:

Although language and vision refer to the same abstractions in the mind, the source of all abstractions is reality itself. That is why we start grasping it with the simplest visual objects, not with language. Later, language-described objects become as real as what we see. But a machine has no way to grasp a human language without interacting with a physical world. That is why OpenAI's learning-to-communicate strategy is promising.
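To make the idea concrete, here is a toy sketch of a referential game in which symbols become grounded in perception. This is a hypothetical minimal illustration, not OpenAI's actual multi-agent setup: a "speaker" perceives an object and invents a symbol for it, a "listener" tries to decode the symbol back to the object, and both adapt from shared success or failure. All names (`OBJECTS`, `SYMBOLS`, the tabular scores) are my own assumptions for the sketch.

```python
import random

# Toy referential game (hypothetical sketch, not OpenAI's method):
# symbols start out meaningless and acquire meaning only through
# repeated interaction grounded in a shared "world" of objects.

OBJECTS = ["circle", "square", "triangle"]
SYMBOLS = ["a", "b", "c"]

# Tabular preference scores, adjusted by trial and error.
speaker = {o: {s: 0.0 for s in SYMBOLS} for o in OBJECTS}
listener = {s: {o: 0.0 for o in OBJECTS} for s in SYMBOLS}

def choose(table, eps=0.1):
    """Epsilon-greedy choice over a score table."""
    if random.random() < eps:
        return random.choice(list(table))
    return max(table, key=table.get)

def play_round():
    obj = random.choice(OBJECTS)      # the world: a perceived object
    sym = choose(speaker[obj])        # speaker names what it sees
    guess = choose(listener[sym])     # listener interprets the symbol
    reward = 1.0 if guess == obj else -0.1
    speaker[obj][sym] += reward       # both agents adapt together
    listener[sym][guess] += reward
    return guess == obj

random.seed(0)
for _ in range(3000):
    play_round()

wins = sum(play_round() for _ in range(1000))
print(f"success rate: {wins / 1000:.2f}")
```

The point of the sketch is that no symbol has a predefined meaning; a working "language" emerges only because both agents are active participants in the same environment, which is exactly the argument for grounding.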

Another reason to do such research is that we cannot yet put robots into the physical world to learn the whole environment; it would simply take too long. A language cannot be acquired from static data; the only way is to be an active participant in an environment. Also, since there is no easy way to run invasive experiments on the human mind, computer simulations are the best candidate to become the tool of linguistics in the 21st century.

The goal is to create an intelligent agent that understands us, and it is a pretty hard problem; it has been researched since the 1960s. Yet we still cannot describe a language formally, because language does not exist without context. The environment is that context.