Humans are not alone in their quest to "see" human faces in the sea of visual cues that surrounds them. For decades, scientists have been training computers to do the same. And, like humans, computers display pareidolia.

Though there is something deeply human about the tendency to see faces in the non-human shapes around us, to anthropomorphize odd pieces of hardware or rocks on a hillside, the fact that computers see faces where there are none should not be too surprising. Facial recognition is a tough technological feat, and in the process computers are bound to produce false positives. Does this make the computers more like us? Have they taken on our most human cognitive errors? In a superficial sense, yes: computers do make errors that resemble pareidolia, and this seems very human. But if you look into these computer false positives a bit more, you find a different story.

In an awesome little creative trick, New York University researcher Greg Borenstein applied the open-source software FaceTracker to a Flickr pool of examples called Hello Little Fella. In some instances, FaceTracker found a face just where you or I would:

Like a human, the computer has found a false positive. That humans and computers share some instances of pareidolia seems to underscore the human-like nature of those computers, brought about by their human-led training. In that sense, a computer's errors make it seem somehow more human.

But maybe the reason a computer "sees" a face in that key is very simple: Things around us do sometimes actually have the shapes that constitute a face. How can we say this is pareidolia, a strange phenomenon that is supposedly the byproduct of millions of years of evolution, and not just the basic truth that sometimes shapes do look like things they are not?

A project from Phil McCarthy called Pareidoloop pushes us to think about these questions. By combining a random-polygon generator with facial-recognition software, McCarthy's program builds its own series of randomly generated faces. Out of layers upon layers of mishmashed shapes, the software "recognizes" the faces, and then fine-tunes them into human likenesses. (McCarthy notes that a lot of them kind of resemble old pictures of Einstein.)
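
The core loop behind a program like this is easy to sketch: generate a random shape, keep it only if it makes a detector's "face-ness" score go up, and repeat. Below is a minimal hill-climbing version in Python. The `face_score` function here is a crude stand-in for a real detector's confidence output (Pareidoloop itself runs in the browser with an off-the-shelf JavaScript face detector), and the eye/mouth layout is an assumption for illustration only:

```python
import random

SIZE = 32  # toy grayscale canvas; 0.0 = white, 1.0 = black

# Assumed feature locations, purely for illustration.
EYES = [(10, 10), (10, 21)]
MOUTH = [(22, c) for c in range(12, 20)]

def face_score(img):
    """Stand-in for a real face detector's confidence score:
    reward dark pixels at the assumed eye/mouth spots, lightly
    penalize dark clutter everywhere else."""
    feature = sum(img[r][c] for r, c in EYES + MOUTH)
    background = sum(sum(row) for row in img) - feature
    return feature - 0.05 * background

def random_rect():
    """One random 'polygon' (a rectangle here, for brevity)."""
    return (random.randrange(SIZE), random.randrange(SIZE),
            random.randint(1, 6), random.randint(1, 6), random.random())

def paint(img, rect):
    """Return a copy of the canvas with the rectangle drawn on it."""
    r0, c0, h, w, shade = rect
    out = [row[:] for row in img]
    for r in range(r0, min(r0 + h, SIZE)):
        for c in range(c0, min(c0 + w, SIZE)):
            out[r][c] = shade
    return out

def evolve(steps=2000):
    """Hill climbing: keep a random shape only if it raises the score."""
    img = [[0.0] * SIZE for _ in range(SIZE)]
    best = face_score(img)
    for _ in range(steps):
        candidate = paint(img, random_rect())
        score = face_score(candidate)
        if score > best:
            img, best = candidate, score
    return img, best
```

The design point is that the detector never "draws" anything; it only judges. Random shapes accumulate into a face-like blob solely because mutations that the detector scores higher are kept, which is exactly why the results look like whatever the detector was trained to reward.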

The computer is "seeing" faces where there are just random shapes! But wouldn't anyone? The results are clearly faces, so much so that recognizing them as such can no more be labeled pareidolia than recognizing faces in a painting of a face can. Where is that line? If it's pareidolia to see a face in the two windows and door of a house, why not in a sketch of two eyes and a nose? Faces are, after all, just a series of well-arranged polygons. We'll see them in the world around us because sometimes, inevitably, shapes will be arranged in the formation of two eyes, a nose, and a mouth. How can we identify pareidolia in a way that is distinct from the "accurate" identification of an artistic representation of a face? How can we say pareidolia is a phenomenon of the human mind at all?