Google is launching yet another crazy moonshot project. This one is a prototype called "Project Tango," which squeezes 3D computer vision technology—similar to that used in the Xbox Kinect—into a smartphone. The device is being cooked up by Google’s Advanced Technology and Projects (ATAP) group, which just moved over from Motorola. Johnny Lee, the Technical Program Lead at ATAP, described the project:

Project Tango strives to give mobile devices a human-like understanding of space and motion through advanced sensor fusion and computer vision, enabling new and enhanced types of user experiences – including 3D scanning, indoor navigation and immersive gaming.

The computer vision is enabled by a new co-processor from Movidius, called the "Myriad 1." The chip was designed from scratch to bring Kinect-style computer vision to smartphones, where size and power draw are huge challenges. In fact, the man quoted above, Johnny Lee, is a former Microsoft employee who worked on the Kinect technology before jumping to Google. Google's goal with Project Tango is to produce the hardware, ship the phone out to developers, and see what they come up with. TechCrunch, which was pre-briefed on the device, says Google is giving the device out to 200 developers, and signups for access start today. Developers who apply will have to pitch their ideas to Google.

The computer vision isn't meant to enable Leap Motion-style hand waving for input, but to let the phone know where it is in 3D space. The rear of the phone is packed with sensors that would allow the device to "scan" a room and build a 3D model of it, which apps could interact with. This sounds like Google is making an augmented reality platform that could really tell what is in a room, instead of crudely guessing the room geometry based on a 2D camera feed.

The prototype device is a 5-inch Android phone with two computer vision co-processors. The rear of the prototype has a 4MP camera, a depth sensor, and a second camera for motion tracking. All of these components are on the top and bottom of the device, so you can still hold the phone more or less normally while all the 3D sensing is going on. Google says the sensors allow the device to make "over a quarter million 3D measurements every second, updating its position and orientation in real time, combining that data into a single 3D model of the space around you."
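To get a feel for what "combining that data into a single 3D model" means, here's a minimal sketch of the general idea, not Google's actual pipeline: each depth frame is back-projected into 3D points using a pinhole camera model, then transformed into a shared world frame using the pose from the motion tracker. The function names, intrinsics, and (R, t) pose format are illustrative assumptions.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-frame 3D points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return pts[z > 0]  # drop pixels with no depth reading

def fuse(frames, poses, intrinsics):
    """Accumulate per-frame depth points into one world-frame point cloud.
    Each pose is an (R, t) pair from the motion tracker: world = R @ cam + t."""
    cloud = []
    for depth, (R, t) in zip(frames, poses):
        pts = depth_to_points(depth, *intrinsics)
        cloud.append(pts @ R.T + t)
    return np.vstack(cloud)
```

A real system would also align frames against each other (drift correction) and merge the points into a surface, but the core loop — depth plus pose in, one growing model out — is what the sensors on the back of the phone feed.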

Google Glass is basically a shrunk-down Samsung Galaxy Nexus, so it's clear what the future of a project like this is. For now, the technology will only fit in a smartphone, but Glass is only a few years behind cutting-edge smartphone hardware. If you're a developer with a cool idea for this, you can pitch it to Google here and hope to be one of the lucky 200.