Google's approach relies on forcing its network to learn compression the hard way. Researchers sampled six million compressed photos from the internet and broke each of them into 32 x 32 pixel pieces. The neural network was then fed 100 bits from each image representing the poorest elements of its compression -- the idea being that if the network could do a better job compressing the worst of the competition, it should do a better job compressing everything.
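To make the selection step concrete, here's a toy sketch of the idea of carving an image into 32 x 32 patches and keeping the ones a conventional codec handles worst. This is not Google's pipeline: zlib stands in for the real codec, the patch format is a made-up flat grayscale buffer, and the function names are invented for illustration.

```python
import zlib
import random

PATCH = 32  # patch side length, as in the article

def patches(image, width, height):
    """Yield 32 x 32 patches from a flat grayscale byte image."""
    for y in range(0, height - PATCH + 1, PATCH):
        for x in range(0, width - PATCH + 1, PATCH):
            rows = [image[(y + r) * width + x:(y + r) * width + x + PATCH]
                    for r in range(PATCH)]
            yield b"".join(rows)

def hardest_patches(image, width, height, keep):
    """Rank patches by how poorly they compress; keep the worst `keep`."""
    scored = [(len(zlib.compress(p)) / len(p), p)
              for p in patches(image, width, height)]
    scored.sort(key=lambda s: s[0], reverse=True)  # least compressible first
    return [p for _, p in scored[:keep]]

# Demo: a 64 x 64 image whose left half is flat and right half is noise.
random.seed(0)
w = h = 64
img = bytearray(w * h)
for y in range(h):
    for x in range(w // 2, w):
        img[y * w + x] = random.randrange(256)
worst = hardest_patches(bytes(img), w, h, keep=2)
```

The two surviving patches come from the noisy half of the image, which is exactly the kind of "hard" content the researchers wanted their network to train against.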

The group's paper breaks the process down further, using math (that admittedly is beyond this writer's comprehension) to demonstrate how the network broke images down into binary code and reconstructed them piece by piece, outperforming JPEG compression at most bitrates. At least by the numbers -- human perception is harder to pin down. Even Google admits that the "human visual system is more sensitive to certain types of distortions than others," and there isn't a universally recognized metric for measuring how humans perceive a compressed image.
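The "piece by piece" part can be pictured as a loop: each pass emits a small binary code describing whatever the previous passes failed to reconstruct, so more passes mean more bits and a better picture. The sketch below illustrates only that loop structure -- a crude sign quantizer with halving step sizes stands in for the recurrent network, and all names here are invented, not from the paper.

```python
# Toy progressive, residual-driven encode/decode loop. Each pass emits
# one "bit" per element (the sign of the remaining error), then the
# step size halves so later passes make finer corrections.

def encode_pass(residual):
    """Quantize the residual to signs -- the binary code for this pass."""
    return [1 if r >= 0 else -1 for r in residual]

def decode_pass(code, step):
    """Turn this pass's signs back into a correction of size `step`."""
    return [c * step for c in code]

def progressive_codec(signal, passes=8, step=128.0):
    recon = [0.0] * len(signal)
    for _ in range(passes):
        residual = [s - r for s, r in zip(signal, recon)]
        code = encode_pass(residual)
        recon = [r + d for r, d in zip(recon, decode_pass(code, step))]
        step /= 2  # refine with finer corrections each pass
    return recon

signal = [12.0, 200.5, 63.0, 150.25]
recon = progressive_codec(signal)
err = max(abs(s - r) for s, r in zip(signal, recon))
```

Stopping the loop early gives a smaller code and a rougher reconstruction, which is the same bits-versus-quality trade-off the bitrate comparison against JPEG is measuring.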

Still, the project is a big step forward in making our ever-growing libraries of media just a little smaller. And that's always a good thing.