A newly developed technique could easily be used to "tap" a room by doing nothing more than studying the super-small movements of objects in the room as sound waves cause them to vibrate.

Researchers from the Massachusetts Institute of Technology, Microsoft, and Adobe have engineered a computer algorithm that can reverse engineer the audio present in a room by analyzing the super-tiny vibrations of objects in that room, according to the MIT News Office.

When sound waves hit an object, they cause it to vibrate very subtly. Close examination of those vibrations, such as the trembling of leaves on a plant, is now all that is required to get a sense of what was said or heard nearby.

Researchers successfully turned everyday objects, including a potato chip bag, aluminum foil, and even the surface of a glass of water, into "visual microphones" whose movements revealed the sounds they were exposed to, even when filmed "from 15 feet away through soundproof glass."
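The core idea can be sketched in a few lines. The real system extracts sub-pixel motion from video using far more sophisticated machinery; the toy example below simply assumes we already have a one-dimensional per-frame displacement signal for an object, sampled at a high-speed camera's frame rate, and shows that treating that motion signal as audio lets us recover the driving sound's frequency. All figures (frame rate, tone, noise level) are illustrative assumptions, not numbers from the research.

```python
import numpy as np

# Illustrative sketch only: assume we already extracted a 1-D displacement
# signal for an object (e.g. a chip bag) from high-speed video.
fps = 2200            # assumed high-speed camera frame rate (samples/second)
duration = 1.0        # seconds of video
tone_hz = 440.0       # assumed tone driving the object's vibration

t = np.arange(int(fps * duration)) / fps
# Tiny vibration induced by the sound, plus measurement noise.
displacement = 1e-3 * np.sin(2 * np.pi * tone_hz * t)
displacement += 1e-4 * np.random.default_rng(0).standard_normal(t.size)

# Treat the per-frame motion signal as audio and find its dominant frequency.
spectrum = np.abs(np.fft.rfft(displacement))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
recovered = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"recovered tone: {recovered:.1f} Hz")    # close to 440 Hz
```

Because the camera samples at 2,200 fps here, any sound component below half that rate (1,100 Hz) is in principle recoverable from the motion signal, which is why the researchers' high-speed recordings capture intelligible speech and music.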

Since the vibrations are so subtle, the technique was thought to require high-speed cameras that capture 2,000 to 6,000 frames per second, versus the 30 or 60 fps typical of consumer cameras. But an oddity in the way most consumer cameras read out their sensors, known as rolling shutter, means enough of the relevant data is still present in ordinary video for the algorithm to produce useful results, despite the much lower frame rate.
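The reason a slow camera can still help is that a rolling-shutter sensor does not capture a whole frame at once: it reads out rows one after another, so each row is effectively a sample taken at a slightly different moment. A rough back-of-the-envelope sketch, with assumed figures rather than numbers from the paper:

```python
# Rough illustration of why rolling shutter raises the effective sample rate.
# Assumed figures; a real sensor's readout rarely spans the full frame period.
fps = 60          # nominal frame rate of a consumer camera
rows = 1080       # sensor rows read out per frame

# Time between consecutive row readouts, if readout fills the frame period.
row_time = 1.0 / (fps * rows)

# If each row yields one motion sample, the effective sampling rate is:
effective_rate = fps * rows
print(f"{effective_rate} samples/s vs. {fps} fps")  # 64800 samples/s vs. 60 fps
```

Even though real readout times are shorter than a full frame period, the point stands: row-by-row capture smuggles kilohertz-scale temporal information into ordinary 60 fps video, which is what the algorithm exploits.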

"We’re recovering sounds from objects," said Abe Davis, first author on the paper summing up the work. "That gives us a lot of information about the sound that’s going on around the object, but it also gives us a lot of information about the object itself, because different objects are going to respond to sound in different ways."

The demo video below walks you through the particulars. It's pretty crazy to think that it's reliable enough to recognize familiar songs and correctly ascertain spoken words and phrases.