Sound is pressure. Both the blare of a trumpet and softly spoken words are just waves passing through the air. When those waves hit your eardrum, they're interpreted as sounds. But when they bounce off other objects—say a bag of chips—they make the bag's thin foil wiggle. Using high-speed video and advanced imaging algorithms, computer scientists have figured out how to reverse engineer the gentle flapping of a chip bag and recover the sounds that set it in motion.

These vibrations are tiny. They're fast. They're all but imperceptible. But they exist, and in the video above from MIT PhD student Abe Davis you can see how researchers reproduced musical notes and human voices from these minute motions captured on video. The technique, which Davis dubbed a “visual microphone,” can even work with a regular video camera.
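To get an intuition for the idea, here is a deliberately simplified sketch—not the MIT team's actual algorithm, which measures sub-pixel motion with far more sophisticated image processing. The toy assumption below is that vibration shows up as a tiny change in a patch's overall brightness, so each video frame yields one audio sample:

```python
import numpy as np

# Toy sketch (NOT the researchers' method): treat each frame's mean
# brightness in a patch as one audio sample. We fake a "vibrating
# surface" whose brightness wiggles in step with an input signal,
# then try to recover that signal from the frames alone.

def synth_frames(signal, size=32, noise=0.01, seed=0):
    """Render fake video frames whose brightness tracks `signal`."""
    rng = np.random.default_rng(seed)
    base = rng.random((size, size))  # static texture (e.g. a chip bag)
    frames = [base * (1.0 + 0.05 * s) + rng.normal(0, noise, (size, size))
              for s in signal]
    return np.stack(frames)

def recover(frames):
    """Collapse each frame to one sample, remove DC offset, normalize."""
    samples = frames.mean(axis=(1, 2))
    samples -= samples.mean()
    return samples / np.abs(samples).max()

t = np.linspace(0, 1, 2000)            # 2000 fps "high-speed" capture
tone = np.sin(2 * np.pi * 440 * t)     # a 440 Hz test tone
recovered = recover(synth_frames(tone))
corr = np.corrcoef(tone, recovered)[0, 1]
```

In this idealized setup the recovered waveform correlates almost perfectly with the original tone. The real problem is far harder: the motions are a fraction of a pixel, so the researchers analyze local motion across the image rather than raw brightness—but the pipeline shape (frames in, one audio sample per frame out) is the same.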

The research raises the specter of all sorts of science fiction possibilities: who needs to read lips or plant recording bugs when you can just lift a conversation off a nearby snack bag?

This isn't the first attempt to lift sounds off images of the environment, but previous efforts, the researchers say in a report, were “active in nature, requiring a laser beam or pattern to be projected onto the vibrating surface.” This latest advancement lets them do away with any such requirements. Using the approach, says MIT, the team was able to lift audio signals off “aluminum foil, the surface of a glass of water, and even the leaves of a potted plant.”

According to their website, the researchers plan to release their code to the world. We're sure there are some corporations or agencies who can't wait to get their hands on a tool just like this.