A revolutionary new theory contradicts a fundamental assumption in neuroscience about how the brain learns. According to researchers at Bar-Ilan University in Israel led by Prof. Ido Kanter, the theory promises to transform our understanding of brain dysfunction and may lead to advanced, faster, deep-learning algorithms.

The brain is a highly complex network containing billions of neurons. Each of these neurons communicates simultaneously with thousands of others via their synapses. A neuron collects its many synaptic incoming signals through dendritic trees.

In 1949, Donald Hebb suggested that learning occurs in the brain by modifying the strength of synapses. Hebb’s theory has remained a deeply rooted assumption in neuroscience.

Synaptic vs. dendritic learning

Hebb was wrong, says Kanter. “A new type of experiment strongly indicates that a faster and enhanced learning process occurs in the neuronal dendrites, similar to what is currently attributed to the synapse,” Kanter and his team suggest in an open-access paper in Nature’s Scientific Reports, published Mar. 23, 2018.

“In this new [faster] dendritic learning process, there are [only] a few adaptive parameters per neuron, in comparison to thousands of tiny and sensitive ones in the synaptic learning scenario,” says Kanter. “Does it make sense to measure the quality of the air we breathe via many tiny, distant satellite sensors at the elevation of a skyscraper, or by using one or several sensors in close proximity to the nose?” he asks. “Similarly, it is more efficient for the neuron to estimate its incoming signals close to its computational unit, the neuron.”
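The parameter-count difference Kanter describes can be made concrete with a toy sketch. The code below is an illustrative assumption, not the authors’ model: it contrasts a neuron with one adjustable weight per synapse against a neuron with a single adaptive gain that rescales all incoming links at once (the paper’s “non-local” nodal rule). The names, the input size, and the gradient-style update are all hypothetical.

```python
# Toy contrast between link-based (synaptic) and node-based (dendritic)
# adaptation. Purely illustrative; not the authors' implementation.
import random

N_INPUTS = 1000  # synapses converging on a single neuron

# Link-based learning: one adjustable weight per incoming synapse,
# i.e. N_INPUTS adaptive parameters for this one neuron.
link_weights = [random.uniform(0.0, 1.0) for _ in range(N_INPUTS)]

# Node-based learning: a single adaptive gain at the neuron itself,
# i.e. one adaptive parameter for the same neuron.
node_gain = 1.0

def neuron_output(inputs, weights, gain):
    """Weighted sum of the inputs, scaled by the neuron's own gain."""
    return gain * sum(w * x for w, x in zip(weights, inputs))

def node_update(gain, error, lr=0.01):
    """Toy gradient-style step on the nodal gain: all incoming links
    are effectively rescaled together by this one adjustment."""
    return gain - lr * error

inputs = [random.uniform(0.0, 1.0) for _ in range(N_INPUTS)]
out = neuron_output(inputs, link_weights, node_gain)
node_gain = node_update(node_gain, error=out - 1.0)
```

In this sketch, the synaptic scenario would require updating all 1,000 weights individually, while the nodal scenario adjusts one parameter that concurrently affects every incoming link, matching the article’s “few adaptive parameters per neuron” point.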

The researchers also found that weak synapses, which make up the majority of the brain’s synapses and were previously assumed to be insignificant, actually play an important role in its dynamics.

According to the researchers, the new learning theory may lead to advanced, faster, deep-learning algorithms and other artificial-intelligence-based applications, and also suggests that we need to reevaluate our current treatments for disordered brain functionality.

This research is supported in part by the TELEM grant of the Israel Council for Higher Education.

Abstract of Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links

Physical models typically assume time-independent interactions, whereas neural networks and machine learning incorporate interactions that function as adjustable parameters. Here we demonstrate a new type of abundant cooperative nonlinear dynamics where learning is attributed solely to the nodes, instead of the network links, whose number is significantly larger. The nodal, neuronal, fast adaptation follows its relative anisotropic (dendritic) input timings, as indicated experimentally, similar to the slow learning mechanism currently attributed to the links, the synapses. It represents a non-local learning rule, where effectively many incoming links to a node concurrently undergo the same adaptation. The network dynamics are now counterintuitively governed by the weak links, which were previously assumed to be insignificant. This cooperative nonlinear dynamic adaptation presents a self-controlled mechanism to prevent divergence or vanishing of the learning parameters, as opposed to learning by links, and also supports self-oscillations of the effective learning parameters. It hints at a hierarchical computational complexity of nodes, following their number of anisotropic inputs, and opens new horizons for advanced deep-learning algorithms and artificial-intelligence-based applications, as well as a new mechanism for enhanced and fast learning by neural networks.