The Ghostly Gaze Illusion provides clues about how the brain processes visual and social information. Image: Rob Jenkins/University of Glasgow



Look at the image above. Are the women gazing at you, or at each other? Both answers are correct, depending on how far away you are from the image. From a distance, the twin sisters appear to be gazing at each other from the corners of their eyes, but move closer, and they appear to be looking straight ahead.

The somewhat creepy Ghostly Gaze Illusion, created by psychologist Rob Jenkins of the University of Glasgow, was awarded second prize in the 2008 Best Illusion of the Year Contest. It is an example of a hybrid image, in which two slightly different photographs are superimposed onto each other. How does the illusion work, and what does it tell us about how the brain processes visual and social information?



"The process begins with two identical photographs that differ only in their gaze direction," Jenkins explains. "We take the fine, detailed information from one, and the coarse 'blobby' information from the other. When we overlay these two components, we arrive at the final image, in which different percepts dominate at different distances."

How the brain interprets complex visual stimuli such as faces is a long-standing mystery for researchers. The process occurs extremely rapidly – the "meaning" of a scene is interpreted within 1/20th of a second, and, even though the information processed by the brain may be incomplete, the interpretation is usually correct.

Occasionally, however, visual stimuli are open to interpretation. This is the case with ambiguous figures, which can be interpreted in more than one way. When an ambiguous image is viewed, a single image impinges upon the retina, but higher order processing in the visual cortex leads to a number of different interpretations of that image, only one of which enters conscious awareness at any one time. Repeated viewing of the image leads to perceptual reversal, whereby first one, and then the other, interpretation is perceived. Ambiguous figures therefore provide a means by which the functioning of the human visual system can be investigated.

Slave Market with the Disappearing Bust of Voltaire, by Salvador Dali. Photograph: Getty Images

Salvador Dali's 1940 painting Slave Market with the Disappearing Bust of Voltaire (above) is an example of an ambiguous figure. In this painting, the two nuns just left of centre can also be perceived as the bust of the French writer and philosopher Voltaire. When looking at the painting, our perception switches from one interpretation to the other.

In a study published in 2002, Lizann Bonnar, then at the University of Glasgow, and her colleagues investigated the stimuli that drive perception of the visual scene depicted in Dali's painting. Participants were presented with a cropped greyscale version of the painting, consisting solely of the area containing the nuns. A "bubbles" filter was used to reveal or obscure certain features of that part of the painting. The researchers found that participants reported seeing the bust of Voltaire when the finer details of the painting were obscured, and reported seeing the nuns when large-scale features were obscured.

This experiment showed the importance of scale information in perception. The researchers were specifically manipulating the spatial frequency content of the painting (that is, the periodicity with which image intensity changes across it). Large-scale features change little over a given distance, and therefore have a low spatial frequency, while fine-grained features change much more over the same distance, and so have a high spatial frequency.
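The distinction can be made concrete with a short numpy sketch. The signals below are illustrative sinusoids, not stimuli from the study; they simply stand in for image intensity sampled along a line across a picture.

```python
import numpy as np

# Two 1-D "image intensity" profiles sampled across the same distance:
x = np.linspace(0.0, 1.0, 200)
coarse = np.sin(2 * np.pi * 2 * x)   # low spatial frequency: 2 cycles
fine = np.sin(2 * np.pi * 20 * x)    # high spatial frequency: 20 cycles

# Total intensity change across the span (sum of absolute differences):
change_coarse = np.abs(np.diff(coarse)).sum()
change_fine = np.abs(np.diff(fine)).sum()
# The fine-grained profile varies far more over the same distance
# than the coarse one, even though both have the same amplitude.
```

The coarse profile completes only two cycles across the span, so its cumulative intensity change is roughly a tenth of the fine profile's.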

In a second experiment, the participants were shown random noise patterns before the cropped greyscale painting. One group was shown a pattern containing high spatial frequencies, the other a pattern containing low spatial frequencies. Afterwards, the former reported seeing the bust of Voltaire, while the latter reported seeing the nuns. This showed that previous experience is an important factor in perception: the noise patterns had biased which frequency channels dominated the participants' subsequent perception of the image.

Hybrid images are a slightly different kind of ambiguous figure, and provide further clues about how the brain processes visual information. They were first created in the mid-1990s by Philippe Schyns of the University of Glasgow and his then PhD student Aude Oliva, now head of the Computational Visual Cognition Laboratory at MIT. Schyns and Oliva used specialized filtering software to remove sharp facial features, such as wrinkles and other blemishes, from one image, and coarse features, such as the shape of the mouth or nose, from the other. The two images were then superimposed.
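The recipe of combining the coarse information from one photograph with the fine detail from another can be sketched in a few lines of Python. This is a minimal illustration using a hand-rolled Gaussian blur as the low-pass filter, not the researchers' actual software; all function names and parameter values here are invented for the example.

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Normalised 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur(img: np.ndarray, sigma: float = 2.0, size: int = 11) -> np.ndarray:
    """Low-pass filter: convolve a greyscale image with a Gaussian."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i : i + size, j : j + size] * k).sum()
    return out

def hybrid(img_a: np.ndarray, img_b: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Coarse 'blobby' component of img_a plus fine detail of img_b."""
    low = blur(img_a, sigma)           # low spatial frequencies of A
    high = img_b - blur(img_b, sigma)  # high spatial frequencies of B
    return low + high
```

Subtracting the blurred version of an image from the image itself leaves only its fine detail, so the hybrid is simply the low-pass component of one face added to the high-pass component of the other.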

Features with high spatial frequencies are visible only from up close, whereas those with low spatial frequencies are visible only from further away. Overlaying the two photos therefore results in a single image that can produce two stable percepts. Only one of them is visible at a given distance, and it is this one that dominates processing in the visual system; the other registers merely as unstructured visual noise.
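Why distance selects one percept can itself be simulated: stepping back from an image acts roughly like a low-pass filter, for which a simple moving average can stand in. Again, this is an illustrative 1-D sketch with made-up signals, not the actual stimuli.

```python
import numpy as np

# 1-D analogue of a hybrid image: a low-frequency "face A" plus
# a high-frequency "face B".
x = np.linspace(0.0, 1.0, 400)
face_a = np.sin(2 * np.pi * 2 * x)    # coarse component (2 cycles)
face_b = np.sin(2 * np.pi * 40 * x)   # fine component (40 cycles)
hybrid_signal = face_a + face_b

# Viewing from a distance blurs the image; a moving average is a
# crude stand-in for that low-pass filtering.
window = 21
kernel = np.ones(window) / window
distant_view = np.convolve(hybrid_signal, kernel, mode="same")

# After "stepping back", the signal correlates strongly with the
# coarse component and barely at all with the fine one.
corr_a = np.corrcoef(distant_view, face_a)[0, 1]
corr_b = np.corrcoef(distant_view, face_b)[0, 1]
```

The moving average almost entirely removes the 40-cycle component while leaving the 2-cycle component nearly intact, which is why only the coarse percept survives at a distance.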

Schyns and Oliva have created several dozen hybrid images, the best known being the one of "Marilyn Einstein". They have used the images to investigate the role of different frequency channels in image recognition, and the time course over which the process occurs. When participants are shown hybrid images for durations of 30 milliseconds, they recognize only the low spatial frequency component of the image, but when the images are displayed for 150 milliseconds, they recognize only the high spatial frequency component. In both cases, they are completely oblivious to the other interpretation of the image.

The researchers have also shown participants hybrid images consisting of sad and angry faces (with high and low spatial frequencies, respectively) of men and women. When these images are displayed for 50 milliseconds and the participants are asked to determine the emotion of the face they saw, they always report seeing the angry face. But when asked to determine the sex of the person in the image, they report seeing a male as often as a female, even though the two faces occupy different spatial frequency bands.

Thus, selection of frequency bands during fast image recognition appears to be flexible: in some cases, the brain picks out characteristics with low spatial frequencies, while in others, it discriminates those with high spatial frequencies. It seems that the brain is adept at selecting the frequency band containing the most information relevant to a particular task. Again, the participants were unaware that the images they viewed contained information in the other frequency range.

Oliva's work shows that the brain extracts large-scale features slightly earlier than fine-grained ones. Large-scale features are processed within 50 milliseconds, giving an overall impression of the visual scene; the processing of fine-grained details begins slightly later, at around 100 milliseconds.

The fine- and coarse-grained features are extracted separately, and processed in parallel through different channels, in successively higher order areas of the visual cortex. In a process called perceptual grouping, the information from the channels is then seamlessly recombined at visual cortical areas of the highest order to produce a coherent, and usually unambiguous, image.

The Ghostly Gaze Illusion also tells us a little about how the brain processes social information. Being highly social animals, we are particularly sensitive to eye gaze direction, as it provides important information about other people's intentions and is therefore crucial for our interactions with them. The Ghostly Gaze Illusion overturns assumptions about how the brain uses light intensity cues from the eyes to determine gaze direction.

"It is often assumed that we tell where other people are looking by following the darkest regions of their eyes," says Jenkins. "The Ghostly Gaze illusion shows that this is not mandatory, and that we will follow the lighter side of the eyes instead if they contain convincing cues such as the curved outline of the pupil."

References:

Jenkins, R. (2007). The lighter side of gaze perception. Perception, DOI: 10.1068/p5745

Bonnar, L. et al. (2002). Understanding Dali's Slave Market with the Disappearing Bust of Voltaire: A case study in the scale information driving perception. Perception, DOI: 10.1068/p3276

• This article was amended on 20 September to give the correct attributions of the researchers. The concluding quote was also amended – the original stated that it is often assumed that we tell where people are looking by "following the pupil and iris region".