One of the big limitations of GPS is that it doesn’t work indoors, and many of the proposed solutions to this problem come firmly from within the box. In general, so-called indoor positioning systems (IPS) are just GPS indoors, applying similar concepts of triangulation and time of flight on a local scale. That’s fine and, hey, if it works it works, but it still seems a bit uncreative — not to mention limited. After all, such a system would mean that your ability to get an indoor position would be dictated by the building’s owner, and whether they’d installed IPS infrastructure. Now, new research from Berkeley shows that the speakers and microphone on a common laptop (and, presumably, smartphone) can be used to map a room or building using echolocation, much like a bat. They call their system SoundLoc.

This isn’t quite the Dark Knight’s memorable echolocation software, but it does work quite similarly. The researchers simply have a speaker emit a sound of a specific frequency and duration, then listen to the overall sound field that results from reflections off the walls, furniture, human occupants, and more. The central breakthrough is really on the software side: a specialized algorithm that filters out noise and confusing data. (If you’re curious, bats do much the same sort of filtering through their physiology, with ear and even auditory brain structures specifically tuned to amplify returning echoes relative to any other sounds.) The team was able to use their echo-mapping technique to distinguish different UC Berkeley rooms with 97.8% accuracy — not bad for an indoor positioning system.
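The paper’s actual pipeline isn’t spelled out here, but the basic idea (chirp, listen, correlate, fingerprint) can be sketched in a few lines of Python. Everything below is a toy illustration, not SoundLoc’s method: the probe signal, the simulated rooms, and the energy-binned fingerprint are all stand-in assumptions.

```python
import math
import random

FS = 8000  # sample rate in Hz (kept small so the toy runs fast)

def chirp(duration=0.05, f0=500.0, f1=3000.0):
    """Linear frequency sweep used as the probe sound."""
    n = int(FS * duration)
    return [math.sin(2 * math.pi * (f0 + (f1 - f0) * i / (2 * n)) * i / FS)
            for i in range(n)]

def echo(probe, delays_gains):
    """Simulate a room: the mic hears delayed, attenuated copies of the probe."""
    out = [0.0] * (len(probe) + max(d for d, _ in delays_gains))
    for d, g in delays_gains:
        for i, s in enumerate(probe):
            out[i + d] += g * s
    return out

def impulse_profile(recording, probe, n_lags=600):
    """Cross-correlate the recording with the known probe to pull out
    the pattern of echoes (a crude matched filter)."""
    return [sum(recording[lag + i] * s
                for i, s in enumerate(probe)
                if lag + i < len(recording))
            for lag in range(n_lags)]

def fingerprint(profile, n_bins=8):
    """Compress the echo profile into a coarse energy-per-time-bin vector,
    normalized so overall loudness doesn't matter."""
    step = len(profile) // n_bins
    energies = [sum(x * x for x in profile[k * step:(k + 1) * step])
                for k in range(n_bins)]
    total = sum(energies) or 1.0
    return [e / total for e in energies]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy demo: two made-up rooms with different echo patterns.
probe = chirp()
room_a = [(0, 1.0), (150, 0.5), (480, 0.3)]  # (delay in samples, gain)
room_b = [(0, 1.0), (60, 0.7), (350, 0.4)]

fp_a = fingerprint(impulse_profile(echo(probe, room_a), probe))
fp_b = fingerprint(impulse_profile(echo(probe, room_b), probe))

# A fresh, slightly noisy reading taken in room A should match A's fingerprint.
random.seed(0)
noisy = [s + random.gauss(0, 0.01) for s in echo(probe, room_a)]
fp_new = fingerprint(impulse_profile(noisy, probe))
print("room A" if distance(fp_new, fp_a) < distance(fp_new, fp_b) else "room B")
# → room A
```

The point of the matched filter is that the chirp decorrelates quickly with shifted copies of itself, so each echo shows up as a sharp peak at its delay; the background chatter the NAER algorithm has to fight in real rooms is reduced here to a bit of Gaussian noise.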

Of course, this technology doesn’t create a visual picture of the room (though, theoretically, you might be able to get something very blocky out of the data), but instead remembers a more abstract auditory fingerprint for each room in a building. Checking a new echo-reading against this list of fingerprints (with some accepted deviation for the movement of people and objects) can quickly tell the device which room it is currently in. Perhaps more importantly, a series of speaker-microphone readers throughout the house could watch as people move through the home. Simple tracking or even height- and shape-based identification could let it distinguish people from one another — but at that point we’re getting away from the idea that this frees us from the need for infrastructure.
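Checking a reading against the stored fingerprints is essentially a nearest-neighbor search with a rejection threshold. The room names, vectors, and tolerance value below are all invented for illustration; only the matching logic reflects the idea described above.

```python
import math

# Hypothetical stored fingerprints: one coarse echo-energy vector per room.
ROOM_FINGERPRINTS = {
    "kitchen":     [0.52, 0.21, 0.12, 0.08, 0.04, 0.03],
    "living room": [0.40, 0.30, 0.15, 0.07, 0.05, 0.03],
    "bedroom":     [0.61, 0.15, 0.10, 0.07, 0.04, 0.03],
}

TOLERANCE = 0.15  # accepted deviation for moved furniture, people, etc.

def locate(reading):
    """Return the best-matching room, or None if nothing is close enough."""
    best_room, best_dist = None, float("inf")
    for room, fp in ROOM_FINGERPRINTS.items():
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(reading, fp)))
        if d < best_dist:
            best_room, best_dist = room, d
    return best_room if best_dist <= TOLERANCE else None

print(locate([0.50, 0.23, 0.11, 0.08, 0.05, 0.03]))  # → kitchen
print(locate([0.10, 0.10, 0.20, 0.20, 0.20, 0.20]))  # → None
```

The tolerance check is what absorbs the “accepted deviation” mentioned above: a fingerprint perturbed by a rearranged couch still matches, while a reading taken in an unmapped room matches nothing.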

The oft-discussed but rarely seen “smart home” can only really be smart if it knows where you are within it, and an echo-based solution might just be the easy, relatively low-tech answer we need. The big problem is that it requires a pre-existing map of the trackable building — though that could easily be taken care of with a quick, unprotected login to a local network hosting a collection of the building’s small echo-map files. This would probably be distinct from the WiFi network, since the radio waves used for WiFi are poorly suited to locating devices through walls. [Read: Google’s Tango smartphone uses a Kinect-like sensor to create 3D maps of the entire world.]

The paper is titled “Acoustic Method for Indoor Localization without Infrastructure,” and that infrastructure-free promise is nice in theory but seems unrealistic. First and foremost, there’s very little chance that this technology will work through the muffling, mic-scratching material of a purse or pants pocket, meaning that users would likely have to specifically choose to echo-map every room they enter (at least once). In general, the home would probably do just as much tracking of the user as the user does of themselves within the home. [arXiv:1407.4409 – “SoundLoc: Acoustic Method for Indoor Localization without Infrastructure”]

Still, you can’t take away the central achievement here: the Noise Adaptive Extraction of Reverberation (NAER) algorithm that allows echo-mapping of even noisy, crowded rooms. If this technique takes off, we’ll definitely see a few community-driven efforts to translate its readings into visual maps (and something tells me they would all feature a certain blue-on-black visual aesthetic), but the detail will be low. Cross-referencing multiple SoundLoc readings in real time could provide more accurate 3D mapping of objects within rooms, but at that point we’re requiring so much infrastructure that many of this technology’s advantages disappear.

More interesting are the less consumer-focused applications. Could a SWAT team be given (or simply seize) access to a building’s network to see the locations of people in a hostage situation? Could big-data analysis constantly watch high-clearance employees for erratic movement through buildings — “Why is Human 9A4 stopping to check out every server room in the basement? Flag for security review.” Think the NSA won’t argue it can derive useful security info by mining indoor maps and real-time personal locations? Could every computer with a microphone turn into a potential room-watcher?

[Read our featured story: Think GPS is awesome? IPS will blow your mind]

This could easily work its way into the larger suite of “assisted GPS” technologies that currently supplement satellite-based positioning. The biggest stumbling block will be permissions, since what you’d be connecting to the larger public network would be not just your physical location but physical data about the spaces you occupy. Would that make you uncomfortable?