A neuron’s perspective: A neuron receives input from an external stimulus via glutamate, providing current information about the external world (layer one). The neuron uses prior information (layer two) to predict the conductance of layer one. The difference between layer two’s expected output of layer one and the actual output of layer one is the membrane voltage, which signals prediction error, and helps the neuron learn about the world. Image credit: Christopher D. Fiorillo.

(PhysOrg.com) -- If you want to understand and predict the behavior of your young daughter, explains neurobiologist Christopher Fiorillo, you might observe how she reacts to various environmental factors. Then, using a statistical analysis, you might try to determine a relationship between her behavior and these external factors. However, an easier and quicker way might be simply to try to understand what the child herself knows about her world. Although young children have similar basic goals, they behave differently from one another because they have different information about the world.

This idea, known to psychologists as the theory of mind, is the basis for a new theory of brain function proposed by Fiorillo, a researcher at Stanford University. His model attempts to provide an understanding of the nervous system by looking at the world from a neuron’s perspective. This “first-person” approach differs from the conventional “third-person” approach to understanding the nervous system, which is based on observing inputs and outputs and trying to figure out the relationship between the two.

“The problem [with the conventional approach] is that the relationship between inputs and outputs is very complicated, even for a single neuron,” Fiorillo told PhysOrg.com. “By contrast, I have tried to figure out what a neuron knows about the world. This is possible because we already know a great deal about the biophysical properties of neurons. I think that if we can figure out what information a neuron has, then we will be able to make better sense of its inputs and outputs. I think that this approach to information will prove to be very useful, regardless of the success of the rest of the theory.”

Learning neurons

Fiorillo’s theory attempts to explain the computational function of the nervous system. Although much progress has been made in understanding the mechanics of the nervous system, there is still no general theory of its computational function. In other words, scientists don’t understand how a system made up of simple, tiny neurons can compute information as a complex, intelligent, and holistic system. In the absence of a computational theory, scientists and engineers have been limited in their ability to design artificial systems that mimic the intelligence of biological systems.

In Fiorillo’s model, each of the billions of neurons in the nervous system shares the same basic computational function. Also, a neuron’s function mirrors the function of the system as a whole. After all, Fiorillo explains, the entire system originally developed from a single cell. However, even though neurons may use the same general computation method, they still have differences, since the information that a neuron has is the result of the particular statistical pattern of inputs to which it has been exposed. Since different neurons develop in different environments, each neuron acquires its own unique set of information.

In the model, the nervous system is united by a common computational “goal” of promoting the future of an individual’s genetic information by selecting the most advantageous behaviors, or outputs. To do this, a neuron compares an external stimulus to its own internal prior information sources. The neuron produces an output signal when the stimulus intensity exceeds its expectation (i.e. when there is a difference between what the neuron expects and what it actually senses). This output signal is called a “prediction error” and it is used to teach the neuron and its target neurons what’s new about the world. By maximizing its prediction errors, the neuron learns things that can ultimately help it achieve its biological goals.
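The comparison described above can be sketched in a few lines of code. This is a toy illustration, not Fiorillo's actual model: the neuron's "expectation" is a simple running average of past stimulus intensity, and its output is the difference between the current stimulus and that expectation. The function name and learning rate are illustrative choices.

```python
# Toy sketch (not the paper's model): a neuron that signals prediction error.
# Its "expectation" is a running average of past stimulus intensity; its
# output is the difference between the current stimulus and that expectation.

def run_neuron(stimuli, learning_rate=0.2):
    """Return prediction-error outputs for a sequence of stimulus intensities."""
    expectation = 0.0
    outputs = []
    for s in stimuli:
        error = s - expectation               # output: what's new vs. expected
        outputs.append(error)
        expectation += learning_rate * error  # learn: move expectation toward stimulus
    return outputs

errors = run_neuron([1.0, 1.0, 1.0, 3.0, 3.0])
# A constant stimulus produces shrinking errors as the neuron learns it;
# a sudden change in the stimulus produces a fresh, large error.
```

Here the unchanging stimulus is gradually "explained away," while the jump from 1.0 to 3.0 produces a large output, mirroring the idea that neurons signal only what exceeds their expectations.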

Neuron-to-neuron signaling

The physical way this works, Fiorillo explains, is that a neuron’s membrane voltage depends on the flow of current through its ion channels. A neuron has multiple groups of ion channels, with each group attending to information from a particular region of space or period of the past. Some groups of ion channels provide current “sensory” information (such as glutamate-regulated ion channels at synapses), whereas others provide prior information (such as voltage-regulated potassium channels). The difference between the neuron’s current and prior information determines its membrane voltage, which is the output signal that the neuron sends to the next neuron, and so on down the chain of neurons.
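A highly simplified sketch of this biophysical picture, under standard assumptions about reversal potentials (the exact values and the steady-state formula are textbook simplifications, not taken from the paper): the membrane voltage is a conductance-weighted average of the reversal potentials of the two channel groups, so it depolarizes only when the sensory conductance outweighs the prior.

```python
# Toy steady-state membrane model (simplified, not biophysically complete):
# voltage is computed from two conductance groups, one carrying current
# sensory information (excitatory, glutamate-gated channels) and one
# carrying prior information (potassium channels).

E_EXC = 0.0    # reversal potential of excitatory (glutamate) channels, mV
E_K = -90.0    # reversal potential of potassium channels, mV

def membrane_voltage(g_sensory, g_prior):
    """Steady-state voltage: conductance-weighted average of reversal potentials."""
    return (g_sensory * E_EXC + g_prior * E_K) / (g_sensory + g_prior)

# When sensory input is weak relative to the prior (potassium) conductance,
# the voltage sits near rest; when sensory input grows to match or exceed
# the prior, the cell depolarizes, signaling a prediction error downstream.
v_expected = membrane_voltage(g_sensory=1.0, g_prior=3.0)    # -67.5 mV
v_surprising = membrane_voltage(g_sensory=3.0, g_prior=3.0)  # -45.0 mV
```

The design point is that the "comparison" is not an explicit subtraction circuit: it falls out of the competition between channel groups pulling the voltage toward different reversal potentials.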

Because the stimulus of each neuron is selected under the influence of reward feedback, the further a neuron is from the system’s sensory input, the more informative its stimulus is about the abstract notion of reward and the less informative its stimulus is about the concrete sensory world. As the “last” neuron in the circuit, a motor neuron has the most information and the least uncertainty about reward, so it’s appropriate that the motor neuron determines the system’s output. As Fiorillo explained, the anatomy of the nervous system supports this proposal. For example, scientists know that taste is usually a better predictor of future reward than light intensity. Appropriately, there are fewer neurons between the gustatory cells in the tongue and motor neurons than there are between photoreceptor cells in the eye and motor neurons.

“A great deal of past work has focused on how neurons form synaptic connections with one another,” Fiorillo said. “However, a neuron has many inputs that are not synaptic but are instead mediated by non-synaptic ion channels. The computational function of these channels has not been well understood, and these channels are often completely absent in the neurons of artificial neural networks.

“I propose that the function of these ion channels in the temporal domain is analogous to the function of synapses in the spatial domain. A synapse, like the neuron from which it originates, is dedicated to a particular region of space. A neuron has many potential synapses, and by choosing its synapses it can choose which regions of space are the most interesting. Similarly, a neuron has many different non-synaptic ion channels (particularly potassium channels) encoded in its genome, and it chooses to express only a small number. These channels are known to differ from one another in their kinetic properties (how rapidly they change in response to changes in voltage). Thus, different channels remember different periods of the past. What I propose is that a neuron selects which of these channels to express in fundamentally the same way that it selects its synaptic inputs. I propose that a neuron selects those channels, or those memories of the past, that are the best predictors of its current synaptic input. These channels would therefore allow a neuron to make predictions through time.”
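Fiorillo's proposal in the quote above can be illustrated with a small sketch. This is a hedged analogy, not his implementation: each potassium-channel type is modeled as an exponential moving average of the input with its own time constant (its "memory of the past"), and the neuron "expresses" the channel type whose memory best predicts its input. All names and parameter values here are illustrative.

```python
# Hedged illustration (a sketch, not the paper's model): treat each channel
# type as a memory of the past with its own time constant, modeled as an
# exponential moving average, and select the channel type whose memory best
# predicts the neuron's input.

import math
import random

def ema_predictions(inputs, tau):
    """Predict each input from an exponential average of the preceding inputs."""
    alpha = 1.0 - math.exp(-1.0 / tau)   # faster kinetics -> shorter memory
    avg, preds = 0.0, []
    for x in inputs:
        preds.append(avg)                # prediction from the channel's memory
        avg += alpha * (x - avg)         # update the memory with the new input
    return preds

def select_channel(inputs, taus):
    """Pick the time constant whose predictions minimize mean squared error."""
    def mse(tau):
        preds = ema_predictions(inputs, tau)
        return sum((x - p) ** 2 for x, p in zip(inputs, preds)) / len(inputs)
    return min(taus, key=mse)

# An input that fluctuates rapidly around a stable level is predicted best
# by a channel with a long memory, which averages the fluctuations away.
rng = random.Random(0)
noisy_input = [rng.uniform(-0.5, 0.5) for _ in range(2000)]
best_tau = select_channel(noisy_input, taus=[1.0, 5.0, 50.0])
```

For this rapidly fluctuating input, a long-memory channel out-predicts a fast one; an input whose statistics change quickly would instead favor a channel with faster kinetics, which is the sense in which "different channels remember different periods of the past."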

Neurological applications

Although the model is simple and provides an elegant explanation for the complexity of the nervous system, there are still questions regarding its accuracy. For instance, the model could hold true for some neurons but not others, and it may only account for a portion of neuronal variation. It may be possible to test the model by determining how well it predicts the synaptic connectivity of neurons that have developed in an environment with a known statistical structure. But because of our general ignorance of the statistical structure of the world, it may be difficult to confirm or reject the model outright.

Whether or not the model is accurate for biological systems, however, it might still prove useful as a computational framework for designing artificial neural networks. Because the system learns for itself, without the need for “built-in” information, it could lead to intelligent systems that can learn from their environments.

“Perhaps the most exciting aspect of understanding the computational function of the nervous system would be that, at least in principle, it would allow us to build an artificial system that exhibits the same sort of intelligence as biological systems,” Fiorillo said. “However, it is important to recognize that even small nervous systems contain an enormous amount of information, and building a comparable artificial system would not be easy even if we understood all the computational principles. I think that it would be best to start with relatively small systems. I think the nervous system of an insect has far more information and is far more intelligent than any artificial system that we have today.”

He added that a computational understanding of the nervous system could also be helpful in treating disorders of the nervous system.

“Many neurological and psychiatric conditions are thought to result from inappropriate connectivity between neurons,” he said. “If this computational theory is correct, then it could allow us to determine what a neuron's connectivity should be in order for the system to function properly. It also suggests how we might be able to change a neuron's connectivity by altering the activity of its inputs and outputs.”

More information: Fiorillo, Christopher D. “Towards a General Theory of Neural Computation Based on Prediction by Single Neurons.” PLoS ONE. October 2008, Volume 3, Issue 10, e3298.

Copyright 2008 PhysOrg.com.

