Image recognition software has grown by leaps and bounds in the past few years, fueled in part by the increasing sophistication of artificial intelligence. While companies like Wolfram Research have put neural nets to the test in image recognition, Google has taken a special shine to it, teaching its networks to find objects and to recognize faces across multiple photos. Now, it seems, Google wants to take the next logical step: background recognition.

Specifically, Google's neural nets are focusing on figuring out where a photo was taken based on the background of the image. To do this, Google built a database of 126 million images, all of them previously geotagged. By pairing those geotags with map data, it created a sort of visual map of the world. Around 91 million of the images were used to train the AI to recognize scenery; the remaining 35 million were held back to test the system's guesses against known locations.
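The core trick behind this kind of system is turning "where was this taken?" into an ordinary classification problem: carve the globe into cells, and label each geotagged training photo with the cell it falls in. The sketch below is purely illustrative and not Google's code; it uses a fixed 10-degree grid, whereas the real system partitions the world adaptively, and the cell size here is an arbitrary assumption.

```python
# Illustrative sketch: geolocation framed as classification over grid cells.
# Not Google's implementation; the fixed CELL_DEG grid is an assumption for
# demonstration (the actual system uses an adaptive partition of the globe).

CELL_DEG = 10.0  # cell size in degrees (hypothetical)
COLS = int(360 // CELL_DEG)  # number of cells per row of the grid

def cell_id(lat, lon):
    """Map a (lat, lon) geotag to a coarse grid-cell label."""
    row = int((lat + 90) // CELL_DEG)
    col = int((lon + 180) // CELL_DEG)
    return row * COLS + col

def cell_center(cid):
    """Recover the center coordinate of a cell, i.e. the model's 'guess'."""
    row, col = divmod(cid, COLS)
    lat = row * CELL_DEG - 90 + CELL_DEG / 2
    lon = col * CELL_DEG - 180 + CELL_DEG / 2
    return lat, lon

# A geotagged training example becomes (image, cell_id): a plain
# classification target rather than a regression to raw coordinates.
paris_cell = cell_id(48.86, 2.35)
print(paris_cell, cell_center(paris_cell))  # → 486 (45.0, 5.0)
```

Predicting a discrete cell instead of exact coordinates is what lets a standard image classifier do the work; at inference time the predicted cell's center serves as the location estimate.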

From there, a fresh batch of geotagged Flickr photos was added, some 2.3 million in all, every one of them entirely new to the artificial brain. The Google engineers knew the locations; the network did not. But it made a few educated guesses.

Google's AI did pretty well. It wasn't creepy good, at least not yet. It guessed the location down to the city block 3.6 percent of the time, and down to the city level 10.1 percent of the time. From there, it got the country right 28.4 percent of the time and the continent 48 percent of the time.
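Figures like "city level" or "country level" are typically computed by measuring the great-circle distance between the predicted and true coordinates and checking it against a radius for each granularity. A minimal sketch of that bookkeeping, with hypothetical radii (the paper defines its own thresholds):

```python
import math

# Illustrative sketch (not Google's code): computing accuracy-within-a-radius
# from predicted vs. true coordinates using great-circle distance.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical radii for each granularity level.
THRESHOLDS_KM = {"street": 1, "city": 25, "country": 750, "continent": 2500}

def accuracy_at(preds, truths, radius_km):
    """Fraction of predictions landing within radius_km of the true spot."""
    hits = sum(haversine_km(*p, *t) <= radius_km for p, t in zip(preds, truths))
    return hits / len(truths)
```

Running each granularity's radius over the same set of predictions yields a profile like the one reported above: accuracy climbs as the allowed radius widens.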

That may not seem impressive, but it's probably better than most humans could do just by looking at unfamiliar landmarks. In fact, the neural net, called PlaNet, beat humans 56 percent of the time in a match-up. With a more robust image set, that accuracy could climb much higher.

As for images taken indoors, the neural net can still make headway by correlating them with other photos from the same album. While you could easily trip up the machine by jumbling the photos, it may eventually be able to recognize the insides of places as well. The entire PlaNet model fits in 377 MB, meaning it doesn't even require a supercomputer to run. Of course, the image database is much, much bigger. You can read the Google team's entire paper here.

Source: MIT Tech Review
