But it’s not just that the camera knows there’s a face and where the eyes are. Cameras also now capture multiple images in the moment to synthesize new ones. Night Sight, a new feature for the Google Pixel, is the best-explained example of how this works. Google developed new techniques for combining multiple inferior (noisy, dark) images into one superior (cleaner, brighter) image. Any photo is really a blend of a bunch of photos captured around the central exposure. But then, as with Apple, Google deploys machine-learning algorithms over the top of these images. The one the company has described publicly helps with white balancing—which helps deliver realistic color in a picture—in low light. Google also told The Verge that “its machine learning detects what objects are in the frame, and the camera is smart enough to know what color they are supposed to have.” Consider how different that is from a normal photograph. Google’s camera is not capturing what is, but what, statistically, is likely.
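Google hasn’t published Night Sight’s full pipeline, which also aligns frames and rejects outliers, but the core statistical idea, that averaging many noisy captures of the same scene cancels out random sensor noise, can be sketched in a few lines. Everything here (the function name, the noise levels, the toy 4×4 “scene”) is illustrative, not Google’s actual code:

```python
import numpy as np

def merge_frames(frames):
    """Average a burst of aligned noisy frames.

    Averaging N frames reduces random sensor noise by roughly
    a factor of sqrt(N), which is how a stack of dark, grainy
    captures can yield one cleaner image. This sketch assumes
    the frames are already perfectly aligned.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst: one true scene plus independent per-frame noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(4, 4))
burst = [scene + rng.normal(0, 20, scene.shape) for _ in range(15)]

merged = merge_frames(burst)

# The merged frame sits much closer to the true scene
# than any single noisy capture does.
single_err = np.abs(burst[0] - scene).mean()
merged_err = np.abs(merged - scene).mean()
```

With a 15-frame burst (the figure Google has cited for its HDR bursts), the averaged frame’s error is several times smaller than any single frame’s, which is the whole trick: trade time for light.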

Picture-taking has become ever more automatic. It’s like commercial pilots flying planes: They are in manual control for only a tiny percentage of a given trip. Our phone-computer-cameras seamlessly, invisibly blur the distinctions between things a camera can do and things a computer can do. There are continuities with pre-existing techniques, of course, but only if you plot the progress of digital photography on some kind of logarithmic scale.

High-dynamic-range, or HDR, photography became popular in the 2000s, dominating the early photo-sharing site Flickr. Photographers captured multiple (usually three) images of the same scene at different exposures. Then they stacked the images on top of one another, taking the shadow information from the brightest photo and the highlight information from the darkest photo. Put them all together, and they could generate beautiful surreality. In the right hands, an HDR photo could create a scene much closer to what our eyes see than what most cameras normally produce.
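The stacking step can be sketched as a per-pixel weighted average: each exposure contributes most where its pixels are well exposed, so the bright frame dominates in the shadows and the dark frame in the highlights. This is a minimal illustration of the weighting idea, not a production HDR merge (real tools work in linear radiance and then tone-map); the function name and 0–255 scale are assumptions for the sketch:

```python
import numpy as np

def blend_hdr(dark, mid, bright):
    """Naively blend three exposures of the same scene.

    Each exposure is weighted, per pixel, by how close its value is
    to mid-gray (128 on a 0-255 scale). Clipped-dark and blown-out
    pixels get low weight, so shadow detail comes mostly from the
    bright frame and highlight detail from the dark frame.
    """
    stack = np.stack([e.astype(np.float64) for e in (dark, mid, bright)])
    weights = 1.0 - np.abs(stack - 128.0) / 128.0
    weights = np.clip(weights, 1e-3, None)  # avoid divide-by-zero
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# A shadow region: nearly black in the dark exposure, readable only
# in the bright one. The blend should lean toward the bright frame.
dark = np.full((2, 2), 5.0)
mid = np.full((2, 2), 40.0)
bright = np.full((2, 2), 120.0)
result = blend_hdr(dark, mid, bright)
```

In this shadow example the bright frame’s near-mid-gray pixels carry almost all the weight, so the blended value lands close to 120 rather than the murky 5 or 40, exactly the “take the shadows from the brightest photo” move described above.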

Our eyes, especially under conditions of variable brightness, can compensate dynamically. Try taking a picture of the moon, for example. The moon itself is very bright, and if you try to take a photo of it, you have to expose it as if it were high noon. But the night is dark, obviously, and so to get a picture of the moon with detail, the rest of the scene is essentially black. Our eyes can see both the moon and the earthly landscape with no problem.

Google and Apple both want to make the HDR process as automatic as our eyes’ adjustments. They’ve incorporated HDR into their default cameras, drawing from a burst of images (Google uses up to 15). HDR has become simply how pictures are taken for most people. As with the skin-smoothing, it no longer really matters if that’s what our eyes would see. Some new features are designed to surpass our own bodies’ impressive visual abilities. “The goal of Night Sight is to make photographs of scenes so dark that you can’t see them clearly with your own eyes — almost like a super-power!” Google writes.