Some of the kids over at Georgia Tech have recently unveiled a development that takes realtime information from varied sources, such as CCTV cameras and motion detectors, and layers it onto Google Earth and Microsoft Virtual Earth. The result? The ability to watch things happen as they happen anywhere in the world (well, not quite anywhere just yet, but that's the idea). While this undoubtedly reeks of "awesome," the mind of a suspicious citizen of the 21st century automatically jumps to a future Orwellian land of Big Brothers, this time running professional versions of Google Earth in dark offices atop mile-high, tinted-window skyscrapers.

From the video streams of just a few cameras, the software can render 3D traffic on a highway within the expansive digital land of Google Earth. It can even estimate how traffic flows on stretches of road that have no surveillance between camera-covered areas.
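The post doesn't say how the researchers fill in those unmonitored stretches, but the simplest version of the idea is easy to sketch: if a car is seen leaving one camera's view and later entering the next camera's view, assume constant speed in between and interpolate. Everything below (names, numbers) is illustrative, not the team's actual method.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """A vehicle sighting by a roadside camera: distance along the
    road (meters) and the time of the sighting (seconds)."""
    position_m: float
    time_s: float

def estimate_position(entry: Observation, exit_: Observation, t: float) -> float:
    """Estimate where a vehicle is at time t on the unmonitored stretch
    between two cameras, assuming it holds a constant speed."""
    if not entry.time_s <= t <= exit_.time_s:
        raise ValueError("t must fall between the two observations")
    speed = (exit_.position_m - entry.position_m) / (exit_.time_s - entry.time_s)
    return entry.position_m + speed * (t - entry.time_s)

# A car passes camera A at the 0 m mark, then camera B at 500 m twenty
# seconds later; halfway through the gap it should be near the 250 m mark.
print(estimate_position(Observation(0.0, 0.0), Observation(500.0, 20.0), 10.0))  # 250.0
```

A real system would blend many such tracks into a flow model rather than animate single cars, but the gap-filling intuition is the same.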

From other cameras placed at various angles around a park, people playing soccer can be watched from above. The same goes for cameras in a stadium tracking and displaying the live feed of an American football game.
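Turning a player's pixel position in a camera frame into a spot on the field is classically done with a planar homography: a 3x3 matrix, calibrated once per camera from known field markings, that maps image coordinates to ground-plane coordinates. The researchers don't detail their projection math here, so this is just a hedged sketch of the standard technique.

```python
import numpy as np

def pixel_to_ground(H: np.ndarray, u: float, v: float) -> tuple[float, float]:
    """Map an image pixel (u, v) onto the ground plane via a 3x3
    homography H, using homogeneous coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return (x / w, y / w)

# With the identity matrix as a stand-in homography, coordinates pass
# through unchanged -- a trivial sanity check, not a real calibration.
print(pixel_to_ground(np.eye(3), 320.0, 240.0))  # (320.0, 240.0)
```

In practice H would be estimated from at least four pixel/field point correspondences (the penalty-box corners, say), after which every tracked player can be dropped onto the virtual pitch.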

Using motion capture devices and a video feed, 3D representations of people walking around a campus, as well as the cars on the streets near them, are also layered onto Google Earth.
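Getting a tracked entity to actually show up in Google Earth generally means emitting KML, the XML format Google Earth renders natively. A minimal sketch, assuming a hypothetical tracker that hands us a name and a longitude/latitude, might look like this:

```python
def placemark_kml(name: str, lon: float, lat: float) -> str:
    """Build a minimal KML document with one Placemark that Google
    Earth can render at the given longitude/latitude."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
        '  <Placemark>\n'
        f'    <name>{name}</name>\n'
        f'    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n'
        '  </Placemark>\n'
        '</kml>\n'
    )

# A pedestrian tracked somewhere on the Georgia Tech campus
# (coordinates made up for illustration):
print(placemark_kml("pedestrian-17", -84.3963, 33.7756))
```

Animated 3D figures would need richer KML (models, time spans, network links for live updates), but a stream of placemarks like this is the basic plumbing between a tracker and the globe.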

Turning one's viewpoint up to the heavens in the software, a user can see current weather conditions, drawn from different sources and layered into the sky.

The video below shows the technology in action.

Up until this point, Google Earth's information has always been essentially static. The satellite images are only updated every so often depending on the location, and the videos, images, links, and text are frozen at the moment a user or someone with authority adds them; they're generally never updated to reflect the constant change as time marches on.

The technology is very new; the students from Georgia Tech are going to debut it at the IEEE International Symposium on Mixed and Augmented Reality next month. However, one can already read the paper they're going to present at the symposium by following this link. There don't seem to be any specific long-term plans or goals for the technology at this time beyond perfecting it, but it's not likely to stay solely in the academic world for long. In the meantime, the researchers plan to add better weather tracking, birds and animals, and motion in rivers to their project.

Despite its usefulness for informational purposes, this technology is definitely going to have to go through a lot of red tape before it shows up in commercial software and devices (if it or even part of it ever does).

Though this is a really exciting development in the realm of technology, and though one can imagine a myriad of uses for it, it's obvious that some restrictions ought to be placed on such a technology before releasing it into the wild, or even into the hands of a few, regardless of how "wise" or "trustworthy" the few may be. Of course, the project's worldwide impact is limited by which camera feeds and other informational streams can be accessed; even the relatively small percentage of city streets and sidewalks that do have surveillance can't all necessarily be accessed legally for public or private use just like that. Still, it isn't hard to negotiate with the various organizations that control these monitoring devices, and it isn't hard to add more.

Some of you are probably already dusting off the tin-foil hats after reading.