I’ve grown a little obsessed with Google’s Night Sight mode for Pixel cameras. Other than the original Pixel’s release, Night Sight is the largest single leap forward in mobile imaging performance, and it’s doubly impressive for being purely a software upgrade. Pixel owners, from the first-generation device to the latest Pixel 3, have had a couple of weeks to play around with it, and one of its hidden advantages, I’ve noticed, is that it subtly improves daytime images as well. In fact, I’d go so far as to say Night Sight is a little misnamed: it’d be truer to call it the Pixel’s “I’ve got time” mode.

The last dozen photos on my phone’s camera roll have all been taken with Night Sight turned on, and only a couple of them were in any sort of low light. Why is that? The impetus for me to try Night Sight in good lighting was mostly curiosity, though I’ve discovered two genuine advantages for pixel peepers like me: better white balance and greater sharpness (at least with the Pixel 3).

The only added cost of Night Sight relative to the regular camera is the couple of extra seconds of exposure time it needs to work its magic. Just as in nighttime shots, the mode is ill-suited to photographing moving objects. However, there’s no danger of the camera overexposing daylight scenes: it’s smart enough to know how much light it needs to capture a scene, and it delivers basically identical exposure levels to the Pixel’s default camera.

A new learning-based algorithm is responsible for handling white balance in the Pixel’s Night Sight. It works by reducing the complexities of color constancy down to a mathematical calculation (you can read Google’s technical white paper here) and then learning what Google’s engineers consider correct and incorrect white balance. Night Sight lead researcher Yael Pritch tells me that Google isn’t yet 100 percent confident in the reliability of this algorithm for all photos and circumstances, but in my experience, it’s been consistently better than the Pixel’s default. It might even be too good.

There’s a philosophical question that arises with Google’s rapidly advancing camera innovations. When the machine camera is capable of eliminating atmospheric and ambient flaws, such as a sunny haze or simple darkness, do we actually want that optimization? Sometimes the haze and the unnatural light are exactly what you want to capture, whether it’s a dust storm hitting Sydney or the effects of wildfires in California. For now, with Night Sight being optional, we don’t have to confront that conflict between realism (relative to what the human eye can see) and accuracy (relative to the underlying color), but it’s a decision we may have to make in the future.

The other appreciable Night Sight advantage in daytime will mostly be felt by Pixel 3 owners. Night Sight makes use of Google’s Super Res Zoom on the 3, which is another computational approach to improve pixel-level detail and sharpness — effectively improving the resolution of the camera — without the help of any additional hardware. This is the number one reason why I think it’s worth taking a few extra moments to shoot a Night Sight shot instead of a regular Pixel photo. The above image comparison shows heavily zoomed-in crops from two larger photos (you can see them below), taken without and with Night Sight. The latter, with Night Sight enabled, shows much sharper lines between the bricks and better definition throughout the frame.

Another little slice from the same image shows a similar outcome, with the default Pixel camera looking like a version of the Night Sight shot with a blur filter applied to it. This is the difference that Super Res Zoom makes on the most casual of shots.

Just to illustrate the point thoroughly, here’s a third and final crop from the same image comparison. I’m relying on this pair of images for the sake of brevity, but this delta in sharpness and performance is consistent across all of my pictures. In very simple terms, turning on Night Sight is like telling Google’s software that you care very deeply about the very smallest detail of each photo, and therefore the camera should take its time to generate the best possible output.

If you’re wondering why Google hasn’t deployed at least some aspects of this Night Sight goodness to its main camera, I think there are two valid answers. One is the above philosophical issue of draining the mood from a photo for the sake of technical excellence. The other is that the main camera should be as simple and fast as possible. Night Sight adds exposure time, and the pictures it creates sometimes need a touch of additional editing. Plus, Night Sight is bad for moving objects, which I guess is another half a reason.

To both contradict and reinforce everything I’ve said above, here are the two original photos from the back of Alexandra Palace in London. At the scale of your phone’s screen, or even on a laptop’s display, you’ll be hard-pressed to spot any meaningful difference between the two, and it takes a really close inspection to appreciate the Night Sight benefits. And yet, they are there. Google’s persistent algorithmic tweaks and improvements have somehow crafted a camera better than the Pixel’s default, and the company has nonchalantly thrown it in as an optional mode on the Pixel.

There’s no doubt in my mind that what we call Night Sight today is a preview of Google’s most advanced computational photography techniques, the stuff that will soon take center stage on the Pixel’s main camera. It’s a pleasingly aggressive way for the company to conduct itself in its competition against other smartphone makers. While everyone else is working overtime just to reach parity with the current Pixel camera, Google is already showing off and letting us play around with the next generation.

Photography by Vlad Savov / The Verge