BERKELEY -- Scientists at the University of California, Berkeley, have recorded signals from deep in the brain of a cat to capture movies of how it views the world around it.

The images they reconstructed from the recordings were fuzzy but recognizable versions of the scenes that played out before the cat's eyes.

The team recorded signals from a total of 177 cells in the lateral geniculate nucleus - a part of the brain's thalamus that processes visual signals from the eye - as they played a digitized movie of indoor and outdoor scenes for the cat. Using simple mathematical filters, the researchers decoded the signals to generate a movie of what the cat actually saw. The reconstructed movie turned out to be amazingly faithful to the original.

"This work demonstrates that we have a reasonable understanding of how visual information is encoded in the thalamus," said Yang Dan, assistant professor of neurobiology at UC Berkeley.

Theoretically, if someone could record from many more cells - the lateral geniculate nucleus contains several hundred thousand nerve cells in all - it should be possible to reconstruct exactly what the animal sees, she said.

The results were reported in the Sept. 15 issue of the "Journal of Neuroscience" by Dan; former postdoctoral fellow Garrett B. Stanley, now an assistant professor at Harvard University; and Princeton University undergraduate Fei Fei Li, who will be a graduate student next year at the California Institute of Technology.

Dan sees the demonstration not only as confirmation of our current understanding of how thalamic cells process signals from the retina, but also as a step toward a larger goal of understanding how the entire brain works. Such understanding is critical to discovering the causes of brain diseases and mental illness.

"Fundamental understanding of brain processes is crucial to understanding illness and eventually could help us come up with treatments," she said.
The current understanding of how cells in this part of the brain respond to visual stimuli has been pieced together over decades by many researchers working with animals. The results show that this approach works.

"Our goal is to understand how information is processed in the brain, how it is encoded," Dan said. "By working backward, using the firing of nerve cells to reconstruct the original scene, we can see where we have been successful and where we haven't.

"We aren't the first to use this decoding technique, but instead of decoding the signals one at a time, we did it simultaneously to get a movie image of what the cat saw."

The lateral geniculate nucleus is only the first stop for visual signals on the way to the brain. Higher areas of the brain, in the cortex, do much more processing of signals. Much work is still necessary to understand the details of such processing, Dan said. This is the main subject of study in her lab.

Dan and her colleagues digitized eight short (16-second) black-and-white movies of scenes ranging from a forest and tree trunks to a face. They then played these in front of an anesthetized cat while recording from cells in the lateral geniculate nucleus.

Cats were chosen because they have excellent vision. Although cats have some color vision, it is primitive, so the group used low-resolution (64 by 64 pixels) black-and-white images to simplify the experiment.

Since the researchers could record from a maximum of eight to 10 cells at once, they replayed the video numerous times to record responses from a total of 177 cells.

The specific cells from which they recorded are called X cells, which respond to slower motion than other cells in the lateral geniculate nucleus. In all, there are some 120,000 X cells representing each eye in this region of the thalamus.
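The decoding Dan describes - working backward from the population's firing to the pixels of the original scene - is at heart a linear filtering problem. The following is a minimal, self-contained Python sketch of that idea using simulated cells and least-squares decoding weights. The cell count echoes the study's 177 recorded cells, but the receptive fields, noise level and frame size are invented for illustration; this is not the authors' code or data.

```python
# A minimal sketch (not the study's actual method or data) of linear
# stimulus reconstruction: simulated "cells" respond linearly to pixel
# intensities, and a least-squares decoder maps firing back to the image.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64          # one tiny 8x8 "frame", flattened (illustrative size)
n_cells = 177          # matches the number of recorded cells in the study
n_frames = 500         # training frames (illustrative)

# Each simulated cell's receptive field: a random linear filter over pixels.
receptive_fields = rng.normal(size=(n_cells, n_pixels))

# Training stimuli and noisy population responses.
stimuli = rng.normal(size=(n_frames, n_pixels))
responses = stimuli @ receptive_fields.T + 0.5 * rng.normal(size=(n_frames, n_cells))

# The "simple mathematical filter": least-squares weights mapping the
# population response back to pixel intensities.
decoder, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

# Reconstruct a held-out frame from its population response.
test_frame = rng.normal(size=(1, n_pixels))
test_response = test_frame @ receptive_fields.T
reconstruction = test_response @ decoder

# Measure how faithful the (somewhat fuzzy) reconstruction is.
corr = np.corrcoef(test_frame.ravel(), reconstruction.ravel())[0, 1]
print(f"correlation between original and reconstruction: {corr:.2f}")
```

With many cells covering each point of the image, the decoder recovers a close but noise-blurred copy of the stimulus, which is consistent with the fuzzy-but-recognizable reconstructions the article describes.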
Based on previous experiments by other researchers, Dan knew that each point in the cat's visual field should generate a spiking signal in 20-30 cells clustered together in the lateral geniculate nucleus. She therefore pooled the on-off responses of between seven and 20 cells to reconstruct what the cat saw at each point in its field of view.

The reconstructions of the scenes were fuzzy and low in contrast, but recognizable.

"We have provided a first demonstration that spatiotemporal natural scenes can be reconstructed from the ensemble responses of visual neurons," the researchers concluded in their journal article.

The research was supported by the National Institutes of Health, an Alfred P. Sloan Research Fellowship, a Beckman Young Investigator Award and a Hellman Faculty Award.

###