See The Difference One Year Makes In Artificial Intelligence Research



Google / Geometric Intelligence

The difference between Google's generated images of 2015, and the images generated in 2016.

Last June, Google wrote that it was teaching its artificial intelligence algorithms to generate images of objects, or "dream." The A.I. tried to generate pictures of things it had seen before, like dumbbells. But it ran into a few problems. It was able to successfully make objects shaped like dumbbells, but each had disembodied arms sticking out from the handles, because arms and dumbbells were closely associated. Over the course of a year, this process has become incredibly refined, meaning these algorithms are learning much more complete ideas about the world.

New research shows that even when trained on a standardized set of images, A.I. can generate increasingly realistic images of objects that it's seen before. Through this, the researchers were also able to sequence the images and make low-resolution videos of actions like skydiving and playing the violin. The paper, from the University of Wyoming, Albert Ludwigs University of Freiburg, and Geometric Intelligence, focuses on deep generator networks, which not only create these images but are able to show how each neuron in the network affects the entire system's understanding.

Looking at generated images from a model is important because it gives researchers a better idea about how their models process data. It's a way to take a look under the hood of algorithms that usually act independent of human intervention as they work. By seeing what computation each neuron in the network does, they can tweak the structure to be faster or more accurate.
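As a toy illustration of the idea, here is a minimal sketch of activation maximization: running gradient ascent on an input so that one neuron's activation grows, revealing what that neuron "prefers." This uses a single hypothetical linear neuron in plain NumPy as a stand-in, not the paper's actual deep generator network:

```python
import numpy as np

# Toy stand-in for a trained network: a single neuron with fixed weights.
# (Illustrative only; real work uses trained deep nets and frameworks.)
rng = np.random.default_rng(0)
W = rng.normal(size=64)  # the neuron's weights over a 64-pixel input

def activation(x):
    """The neuron's raw activation for input x."""
    return float(W @ x)

# Gradient ascent on the *input*, not the weights: find the image
# that most strongly activates this neuron.
x = np.zeros(64)
for _ in range(100):
    grad = W                   # d(activation)/dx for a linear neuron
    x += 0.1 * grad
    x = np.clip(x, -1.0, 1.0)  # keep "pixels" in a valid range
```

After optimization, `x` saturates toward the sign pattern of the weights, so looking at it tells you directly which input features the neuron responds to. That is the intuition behind visualizing what a real network's neurons have learned.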

"With real images, it is unclear which of their features a neuron has learned," the team wrote. "For example, if a neuron is activated by a picture of a lawn mower on grass, it is unclear if it ‘cares about’ the grass, but if an image...contains grass, we can be more confident the neuron has learned to pay attention to that context."

In effect, they're doing research on their own research, and this technique gives them a valuable tool for continuing to do so.





My commentary: Things are actually progressing faster than even Kurzweil predicted— and I think I know why.

Money.

When Kurzweil wrote things like The Singularity is Near and The Age of Spiritual Machines, AI funding was absurdly low compared to where it is now. There wasn't much mainstream interest in creating AI, partially because there was no reason to create one. Even during "AI summers", there was nowhere near as directed an effort to create AI as there is now. And the biggest reason we got AI winters was that the AI developed during previous AI summers was never any good. There just wasn't enough computing power. So it goes like this:

High funding + lack of success = disappointment and AI autumns. Low funding + lack of success = AI winters. Then something happens to reinvigorate the field, and the cycle repeats itself.

What changed in recent years?

We now have the computing power needed to make all these otherwise decades-old methods work. Artificial intelligence research hasn't really progressed since the 1970s— there aren't any new methods, new ideas, new concepts. However, as we've been seeing in recent months, perhaps that's not the problem. Perhaps the old methods we have are exactly what we needed all along, and it's only our computers' inability to run them fast enough that led to such great disappointment.

Starting around 2009, we entered another AI summer. What makes this one different from all the rest?

We're actually seeing results.

For the first time ever, the cycle has been broken. High funding + success = excitement and higher funding. Investment in AI has never been greater than it is now, and the best part about all this is that we have a veritable computer superpower behind us: Alphabet. Google more specifically, but Alphabet nonetheless.

Alphabet has been throwing billions of dollars at the AI problem in the past few years. I wouldn't be surprised if it invested more in AI in the past 5 years than was invested in the field between 1955 and 2010. And it's paying off.

Back in 2010, you couldn't talk about AI or robotics without tongues boring through cheeks and every sentence ending with a chuckle or a jokey comment about Skynet. And while we still get comments about Skynet, all the rest has since vanished: serious conversations about the progression of AI and robotics are everywhere, from the World Economic Forum and Congress all the way to church groups and school debates. All that in just 6 years.

I said that AI progress in 2016 was going to make 2015— one of the most astounding years for AI— look like nothing happened. I was right. The year isn't half over, and already we've made history. People will be talking about the match between AlphaGo and Lee Sedol for centuries, just like how we're still talking about the match between Deep Blue and Garry Kasparov or when IBM Watson decimated humans at Jeopardy.

Looking into the short term: what does this mean for 2017? My prediction right now is that we're going to see more AI progress in 3 months than was made between 2010 and 2016.

I'm not lying when I say I fully expect the world's top deep learning systems 1 year from now to be able to do some fantastic things. Things like playing Halo: Combat Evolved. That sounds like a joke, but it's really not. One thing AI needs in order to be useful in the real world is the ability to navigate 3D space. AI has all but mastered 2D games. When DeepMind is able to master Doom, GoldenEye, Halo, Mirror's Edge, those sorts of games, we'll know that we're just about ready to test DeepMind with a robotic body.

But what really needs to happen? Putting AI to work in developing AI. We've heard of things like genetic algorithms, no? When we get DeepMind ready to program itself, that's when we'll really begin noticing an intelligence explosion.
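For anyone who hasn't seen one, a genetic algorithm is easy to sketch. Here is a minimal, hypothetical example in Python, using the classic "OneMax" toy problem (evolve a bit-string toward all ones), not anything DeepMind actually runs. Selection keeps the fitter half of the population, crossover mixes parents, and mutation flips the occasional bit:

```python
import random

random.seed(0)  # deterministic run for illustration

def fitness(bits):
    # Toy objective: count of 1s (the "OneMax" problem).
    return sum(bits)

def evolve(pop_size=30, length=20, generations=60, mut_rate=0.05):
    # Start from a random population of bit-strings.
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # selection: keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # single-point crossover
            child = a[:cut] + b[cut:]
            # mutation: flip each bit with probability mut_rate
            child = [bit ^ (random.random() < mut_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The point of the analogy: nothing in the loop above knows what a good solution looks like; variation plus selection finds it anyway. An AI that could apply that kind of search to its own code is the scenario behind the "intelligence explosion" idea.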