
At Google, a team of researchers recently demonstrated an artificially intelligent system that could reliably identify a mountain-unicycling video.

As another Google researcher put it: "Who knew mountain unicycling was a thing?" But the implications of this system extend well beyond the realm of obscure outdoor sports. Making use of a technology called recurrent neural nets, it pointed to a near future where our artificially intelligent machines include a kind of artificial short-term memory. Basically, the system could identify the mountain unicycling because it could "remember." As it examined each frame of a video, it could look back at frames it had seen in the past.

Recurrent neural nets, or RNNs, can not only recognize complex moving images but also automatically generate detailed captions for online photos and videos, improve online services that translate from one language to another, and more. They're making their way into companies like Facebook and Baidu as well as Google, and in recent weeks, this burgeoning technology received another shot in the arm with the arrival of a new startup called Nnaisense.

According to the company's website, Nnaisense was founded by Jürgen Schmidhuber, a key figure in the development of modern RNNs, and four researchers who work alongside him at the Swiss AI lab called IDSIA (Istituto Dalle Molle di Studi sull'Intelligenza Artificiale). The German-born Schmidhuber helped create a breed of recurrent neural net called LSTM, or Long Short-Term Memory, and his work has influenced the latest AI research at the likes of Google, Microsoft, IBM, and others.

Neither Schmidhuber nor Nnaisense immediately responded to requests to discuss the company's aims. The company is still very new. It registered the nnaisense.com internet domain this spring, and in June, it filed a trademark application for the nnaisense name (a nod to "neural networks" and "artificial intelligence"). As it stands, the company's website says its mission is to "build large-scale neural network solutions for superhuman perception and intelligent automation, with the ultimate goal of marketing general-purpose neural network-based Artificial Intelligences."

In other words, it's trying to do pretty much what Google and Facebook and Baidu are trying to do. But it's notable that Schmidhuber is entering the arena. The leading internet companies are jockeying for top talent in the field of "deep learning," a form of artificial intelligence that includes recurrent neural nets, and even without a product, Nnaisense is a potential target.

In recent years, Google and Facebook snapped up two notable names in the field, Geoff Hinton and Yann LeCun. This month, IBM inked an agreement with another, University of Montreal professor Yoshua Bengio. And others, like Twitter, have grabbed various researchers who studied under those Big Three.

Schmidhuber and his colleagues represent another talent pool. In fact, they may be looking for a place inside one of the giants of the net, which can provide not only the money that can fuel more advanced research in this field, but also the enormous amounts of digital data needed to drive deep learning services. "The trend is: researchers going towards industry," says Adam Gibson, the co-founder of a deep learning startup called Skymind. "These guys want to see their research applied."

Machines With Memory

Deep learning is an umbrella term used to describe the use of particularly complex neural networks—networks of machines that mimic the web of neurons in the human brain. Basically, if you feed these systems large amounts of data, they can "learn" to perform certain tasks. If you feed them cat photos, for instance, they can learn to identify a cat.
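That "feed it data and it learns" idea can be sketched with a toy example: a single artificial neuron nudging its weights until it gets labeled examples right. This is a minimal illustration with invented numbers standing in for image features, not any real production system:

```python
import numpy as np

# Toy "cat vs. not-cat" data: each row is a made-up 3-number feature
# vector, each label is 1 (cat) or 0 (not cat). Purely illustrative.
X = np.array([[0.9, 0.8, 0.1],
              [0.8, 0.9, 0.2],
              [0.1, 0.2, 0.9],
              [0.2, 0.1, 0.8]])
y = np.array([1, 1, 0, 0])

rng = np.random.default_rng(0)
w = rng.normal(size=3)   # weights start out random
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Learning": repeatedly nudge the weights to shrink prediction error.
for _ in range(1000):
    p = sigmoid(X @ w + b)            # current guesses, between 0 and 1
    grad_w = X.T @ (p - y) / len(y)   # how the error changes with w
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                 # gradient-descent step
    b -= 0.5 * grad_b

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print(preds)  # after training, matches the labels: [1 1 0 0]
```

Real deep learning stacks many layers of such units and trains on millions of examples, but the principle is the same: adjust the network until its outputs line up with the data.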

Using "convolutional neural nets," Facebook can now recognize faces in photos posted to its social network. Google uses "convnets" to recognize commands you speak into your Android phone. At Baidu, they help drive a kind of visual search engine.

Convnets are remarkably effective, and they can help with a wide range of tasks, from ad targeting to language translation. But recurrent neural nets can potentially take the state of the art even further. Whereas a convnet accepts a single type of input (images, say) and spits out a single output (what category an image falls into), an RNN can ingest multiple inputs and deliver multiple outputs.

"Recurrent neural networks can operate with sequences," says Andrej Karpathy, a Stanford University deep learning researcher who previously interned with one of Google's AI groups. "They can make observations over time, and then modify their internal operations based on that."

One way to think about this is that RNNs exhibit something akin to a short-term memory. Facebook's LeCun refers to this as a "scratch pad." While the neural net is examining one thing, it can keep another in mind. It can use one input to influence its analysis of another.

"They remember what they just saw, like the previous word in a sentence, and they use that to affect what they think the next word is," says Skymind co-founder Chris Nicholson. "Unlike other neural networks, they include this internal feedback loop where their past experience directly impacts current activity, a bit like we rely on our memories to know how to respond to the world."
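That feedback loop can be sketched as a bare-bones recurrent cell: at each step, the hidden state (the "scratch pad") is computed from both the current input and the previous state, so everything seen so far can color the next step. This is a minimal, untrained sketch with made-up sizes, not any production RNN, and not an LSTM, which adds gating machinery on top of this basic idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: each input is a 4-number vector, the "scratch pad"
# (hidden state) holds 3 numbers. Weights are random, i.e. untrained.
W_in = rng.normal(size=(3, 4))   # current input -> hidden state
W_rec = rng.normal(size=(3, 3))  # previous hidden -> hidden (the loop)

def rnn_step(x, h_prev):
    # The new state depends on BOTH the current input and the prior
    # state -- this is the internal feedback loop, the "memory."
    return np.tanh(W_in @ x + W_rec @ h_prev)

sequence = [rng.normal(size=4) for _ in range(6)]  # e.g. 6 video frames
h = np.zeros(3)                                    # empty scratch pad
states = []
for x in sequence:
    h = rnn_step(x, h)   # what it just "saw" feeds into the next step
    states.append(h)

# Each state summarizes everything seen so far, not just the last input.
print(len(states), states[-1].shape)  # 6 (3,)
```

A plain feed-forward net would process each of the six inputs in isolation; here, feeding the same input at step one and step six yields different states, because the history carried in `h` differs.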

So, a recurrent neural net can collectively examine the many frames of a mountain unicycling video. It can analyze the many tiny pixels that make up a photo, in an effort to generate a descriptive caption. It can analyze the many words that make up a paragraph describing The Lord of the Rings, so it can later answer questions about the Tolkien novels.

The Acquihire Opportunity

Now, one of the academic researchers behind this technology, Schmidhuber, is moving beyond academia. "Juergen has been working with the topic for a very long time but until now has not been associated with a company," Karpathy says.

It's unclear what applications Nnaisense will tackle. And for that reason, David Luan, the CEO of AI startup Dextro, is reserving judgment. "From a business perspective," he says, "it's still to be seen whether they choose to pursue a targeted problem with a tailored product or whether they are instead aiming to develop technology that can eventually be acquired and integrated into a larger company, as many research-oriented general AI startups do."

An acquisition may indeed be the company's aim—or at least one of them. Google acquired DNNresearch, the AI startup founded by Geoff Hinton, as well as the DeepMind startup founded by several researchers in England. Twitter acquired two other young deep-learning startups. Asked about Nnaisense, Gibson says: "This reminds me a lot of what Hinton did with DNNresearch."