The Google Pixel has been praised for many reasons, but it is especially popular for its camera quality. The Search Giant has gone from having the worst cameras in the industry to offering some of the best. How did they accomplish this? It turns out part of what makes the Pixel’s camera so good is that it adopts software originally made for Google Glass.

Google Glass was a very interesting project in its time. In terms of photography, it gave users easy access to a first-person point of view, something casual users, doctors and even the porn industry (among others) took advantage of. There was one significant problem during its development, though.


Google wanted the Glass camera to be on par with smartphone cameras, a difficult goal given the restrictions on sensor, lens and overall size. A smaller camera usually captures less light, which means noisier, worse photos. Alphabet's X took the issue back to the drawing board and came up with a solution that later became known as Gcam.

The trick was to improve photography without improving the hardware, which the team did with better software techniques during image processing. They used a method called “image fusion”, in which the camera takes multiple shots in quick succession and then merges them to get the best out of each frame.
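Google's actual pipeline aligns and merges burst frames with far more sophistication, but the core intuition behind image fusion can be sketched in a few lines: averaging an aligned burst of shots suppresses random sensor noise (roughly in proportion to the square root of the number of frames) while preserving scene detail. Everything below (the frame sizes, the noise level, the `fuse_burst` helper) is illustrative, not Google's code.

```python
import numpy as np

def fuse_burst(frames):
    """Merge a burst of aligned frames by averaging.

    Averaging N frames with independent noise reduces the noise
    by roughly a factor of sqrt(N), while static scene content
    is unchanged.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst: one clean "scene" plus per-frame sensor noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(32, 32))
burst = [scene + rng.normal(0, 20, size=scene.shape) for _ in range(8)]

fused = fuse_burst(burst)

# The fused image should be closer to the true scene than any single frame.
single_err = np.abs(burst[0] - scene).mean()
fused_err = np.abs(fused - scene).mean()
```

In practice the hard part is the alignment step this sketch skips: real burst frames are offset by hand shake and subject motion, so they must be registered (and moving regions rejected) before merging, or the average would smear.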

This will sound very familiar if you have a basic knowledge of photography – it is much like HDR. In fact, it is labeled HDR+ in the Google Pixel’s camera options. Google’s camera technology has also started reaching other Android devices, YouTube, Google Photos and Jump (Google’s 360-degree camera rig).

In the future, Google aims to use machine learning to improve white balance and let the software decide what to do with the background (blur, darken, lighten, etc.).

Maybe Google Glass was a success after all, right? Just not in the way Google hoped it would be.