The Assumption of Synaptic Learning, 1949–2017

The brain is a complex network containing billions of neurons, where each neuron communicates simultaneously with thousands of others via its synapses. The neuron, however, collects its many incoming synaptic signals solely through several extremely long ramified terminals, called dendritic trees (Figure 1). In 1949, Donald Hebb’s pioneering work (1) suggested that learning occurs in the brain through the modification of synaptic strengths: synaptic plasticity. Specifically, a synaptic strength is modified according to local learning rules, such as the relative timing of the activity of its pre- and postsynaptic neurons. (2) Synaptic plasticity has been one of the major foci of research activity in neuroscience over the past decades, and it has remained the common assumption for learning until very recently. (3) It also serves as the basis of all advanced machine learning and deep learning achievements.

Figure 1. Scheme of a soma with two dendritic trees. Left: synaptic strengths (red valves) take two extreme values, on/off, and their relative distance from the soma is exemplified (left scales). Right: dendritic strengths (two green valves), which adaptively adjust to a spectrum of values.

The Paradox of Synaptic Learning

The concept of synaptic learning is almost 70 years old but still suffers from severe conceptual and experimental difficulties, such as:

(i) The experimental evidence gathered so far to support synaptic plasticity in the cortex is limited and includes large fluctuations and inconsistencies. In addition, its time scale ranges between seconds, hours, days, and probably even longer. A clear rule is hardly, if ever, to be found.

(ii) The common mechanism proposed for synaptic plasticity is the backpropagating signal from the spiking neuron; hence, synaptic plasticity occurs far away from the soma (Figure 1). This distance can exceed 100 times the size of the soma, and the backpropagating “learning instruction” has to flow along long “pipes” with a variety of conducting features. Many accurate intra- and intercellular mechanisms have already been revealed in biology and neuroscience, yet, unexpectedly, this influential mechanism for learning is inaccurate.

(iii) Synaptic learning drives synaptic strengths to unrealistic extremes: either above-threshold or vanishing synapses. (3) Preventing such a catastrophe requires the introduction of an ad hoc, sensitive balancing mechanism, similar to the idea of the excitation–inhibition balance, which was recently shown experimentally and theoretically to be wrong.

(iv) Synaptic plasticity was introduced by Donald Hebb to describe learning; however, the quantitative interplay between the current form of synaptic plasticity and learning has become weaker over the last several decades. (4)

Conceptual Change of Dendritic Learning in Proximity to the Neuron

Copernicus was the first to articulate loudly that the earth revolves around the sun, and not vice versa, even though all the astronomical evidence accumulated at that time could be fit to the old postulation; the required trajectories of the sun and planets, however, became more and more complicated. “I guess something is wrong”, Copernicus said to himself as he swam against conventional wisdom to seek an alternative explanation. “Hebb’s theory, which has been so deeply rooted in the scientific world for 70 years, seems to be wrong”, I have told my students. Using new types of experiments on neuronal cultures, (5) we have recently shown that learning is actually performed by several dendrites, in a manner similar to the slow learning mechanism currently attributed to the synapses. Synaptic blockers were added to neuronal cultures such that synaptic connectivity was excluded, and a patched neuron was extracellularly stimulated from several sites using a multielectrode array. The results indicate that a learning scheme occurs in a few dendrites, which are in much closer proximity to the soma (Figure 1).

Learning Is Enhanced and Occurs Faster than We Thought

The learning process is based on a training set of pairs of stimulations: an above-threshold intracellular stimulation followed (or preceded) by an extracellular stimulation that does not result in evoked spikes. The main challenge of the experiment was to maintain a stable neuron and to control the relative timings of these stimulations with high precision. The adaptation emerges a few minutes after the termination of the training procedure and was found to be stable and persistent over much longer periods. This learning is much faster than the comparable synaptic adaptation, which occurs on a time scale of tens of minutes. In addition, the amplitude of the adaptation was found to be enhanced in comparison with synaptic adaptation.
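The paired-stimulation protocol can be sketched schematically in code. This is a minimal illustration only: the pair period, the timing offset, and the function and variable names are hypothetical choices, not the parameters used in the experiments.

```python
# Hypothetical sketch of a paired-stimulation training schedule.
# All timing values are illustrative assumptions, not measured protocol
# parameters.

def training_schedule(n_pairs, period_ms=500.0, dt_ms=2.0):
    """Return (t_intra, t_extra) onset times (ms) for each training pair.

    dt_ms > 0: the above-threshold intracellular pulse precedes the
    extracellular one; dt_ms < 0: it follows it. Controlling dt_ms with
    high precision is the key experimental requirement.
    """
    pairs = []
    for k in range(n_pairs):
        t_intra = k * period_ms        # above-threshold intracellular pulse
        t_extra = t_intra + dt_ms      # extracellular pulse (no evoked spike)
        pairs.append((t_intra, t_extra))
    return pairs

schedule = training_schedule(n_pairs=50, period_ms=500.0, dt_ms=2.0)
```

Flipping the sign of `dt_ms` in this sketch corresponds to reversing the order of the intracellular and extracellular stimulations within each pair.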

Weak Effective Synapses Play a Key Role in Oscillating Adaptation

On the theoretical level, dendritic adaptation represents a new type of abundant cooperative nonlinear dynamics, since effectively all synapses incoming to an updated dendrite concurrently undergo the same adaptation. Note that the effective stimulation of a neuron is a combination of the fixed synaptic strengths and the time-dependent dendritic strengths, which are connected in series. Simulations of small networks with such cooperative dynamics reveal a new phenomenon: an oscillatory behavior of the dendritic strengths. These self-oscillations constitute a self-controlled mechanism that prevents divergence or vanishing of the learning parameters, as opposed to the unrealistic strengths reached in synaptic learning. The oscillatory network dynamics is now, counterintuitively, governed by the effectively weak synapses, which were previously assumed to be insignificant even though they constitute the majority of the synapses in our brain.
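The cooperative, self-limiting character of this dynamics can be conveyed by a toy simulation. This is not the published model: the update rule, thresholds, and rates below are assumptions chosen only to illustrate how a shared per-dendrite adaptation, acting in series with fixed synaptic weights, keeps the strengths oscillating rather than diverging or vanishing.

```python
import numpy as np

# Toy model of cooperative dendritic adaptation (illustrative only).
# Each dendrite carries several fixed synaptic weights; its single
# adaptive strength multiplies all of them ("connected in series").
rng = np.random.default_rng(0)

n_dendrites, n_syn = 2, 5
w = rng.uniform(0.1, 1.0, (n_dendrites, n_syn))   # fixed synaptic strengths
d = np.ones(n_dendrites)                          # adaptive dendritic strengths
theta, eta = 2.0, 0.2                             # somatic threshold, adaptation rate

history = []
for _ in range(200):
    x = rng.integers(0, 2, (n_dendrites, n_syn))  # random presynaptic activity
    drive = d * (w * x).sum(axis=1)               # synapse and dendrite in series
    spike = drive.sum() >= theta                  # somatic threshold crossing
    # Cooperative update: all synapses on a dendrite share one adaptation step.
    # Depress on a spike, potentiate when silent -- an assumed self-limiting
    # rule that keeps the strengths hovering around the firing threshold.
    active = drive > 0
    d = np.clip(d + np.where(active, -eta if spike else eta, 0.0), 0.1, 3.0)
    history.append(d.copy())

history = np.array(history)
```

Under this assumed rule the dendritic strengths neither diverge nor vanish; they oscillate around the value at which the summed drive crosses the somatic threshold, mimicking the self-controlled behavior described above.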

A New Language Has to Be Created for Learning

A phase separation between the training procedure and the subsequent functioning as an associative memory with generalization capabilities is a common assumption in the utilization of neural networks. However, the cooperative dynamics, including the fast and slow oscillations induced by dendritic adaptation, significantly violates this assumption, since effectively strong and weak synapses can rapidly flip their roles. In addition, the order, speed, and precise timings of the examples presented to the network can significantly change its functioning (Figure 2). Hence, dendritic learning demands new quantitative definitions of notions such as learning and the embedding of information. Moreover, the existing theoretical methods for calculating, for instance, bounds and asymptotic learning curves have to be rebuilt to include the dynamical formalism of dendritic learning. It is also expected that this paradigm shift will open new horizons for advanced deep learning algorithms and artificial-intelligence-based applications.

Figure 2. Scheme of different orderings and timings of the inputs, resulting in different responses.
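The order dependence sketched in Figure 2 can be made concrete with a deliberately minimal example. All numbers and names here are illustrative assumptions: a single adaptive dendritic strength, a multiplicative update, and bounds on the strength. Because the update depends on whether the current drive crosses the threshold, and the bounds clip the trajectory, presenting the same two inputs in a different order leaves the system in a different state.

```python
# Toy illustration of order dependence: the same two inputs, presented in
# different order, leave the adaptive dendritic strength -- and hence the
# subsequent response -- in different states. All values are illustrative
# assumptions, not measured quantities.

THETA = 1.5              # somatic firing threshold
D_MIN, D_MAX = 0.2, 2.0  # bounds on the dendritic strength

def present(d, stimulus):
    """One adaptation step: multiplicative update, clipped to the bounds."""
    spike = stimulus * d >= THETA
    d = d * 0.1 if spike else d * 2.0   # depress on a spike, potentiate when silent
    return min(max(d, D_MIN), D_MAX)

def run(sequence, d0=1.0):
    d = d0
    for s in sequence:
        d = present(d, s)
    return d

strong, weak = 2.0, 0.5
d_ab = run([strong, weak])   # strong first -> final strength 0.4
d_ba = run([weak, strong])   # weak first   -> final strength 0.2
```

Since the final dendritic strength differs between the two orderings, the network's response to any subsequent probe stimulus differs as well, which is precisely why the order and timing of the presented examples become part of what the network learns.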

Future Directions: Chemistry and Biology

The philosophy that drives our research is quite unusual, since we aim to discover new phenomena experimentally, and subsequently their implications at the network level, but not necessarily the underlying mechanism. Nevertheless, investigation and understanding of the underlying mechanism by biologists, chemists, and pharmacologists is highly significant. It is important first to demonstrate the generality of dendritic learning in various cell types and to investigate its efficiency and the available learning time scales in more realistic scenarios and with a variety of stimulations. In addition, one can currently conclude that dendritic adaptation exists; however, whether a synaptic adaptation operates in series with it remains an open question, and advanced experiments are required to resolve it. Moreover, the new learning scenario occurs at different sites, the dendrites, in much closer proximity to the soma. The maintenance of learning at these new dendritic sites, and their recovery and improvement in cases of damage or sickness, such as disordered brain functionality, calls for immediate attention. Investigating dendritic learning is feasible with technology and equipment that have existed since the 1990s; however, it requires a paradigm shift, leaving behind Hebb’s theory, which is so deeply rooted in the scientific community. The main new experimental ingredient to be added is the extended stimulation of a stable neuron from several directions with controlled timings and strengths.

This research was supported by the TELEM grant of the Council for Higher Education of Israel. The authors declare no competing financial interest.

References

(1) Hebb, D. (1949) The Organization of Behavior: A Neuropsychological Theory, Wiley, New York.
(2) Dan, Y., and Poo, M.-M. (2006) Spike timing-dependent plasticity: from synapse to perception. Physiol. Rev. 86 (3), 1033–1048, DOI: 10.1152/physrev.00030.2005.
(3) Sardi, S., Vardi, R., Goldental, A., Sheinin, A., Uzan, H., and Kanter, I. (2018) Adaptive nodes enrich nonlinear cooperative learning beyond traditional adaptation by links. Sci. Rep. 8 (1), 5100, DOI: 10.1038/s41598-018-23471-7.
(4) LeCun, Y., Bengio, Y., and Hinton, G. (2015) Deep learning. Nature 521 (7553), 436, DOI: 10.1038/nature14539.
(5) Sardi, S., Vardi, R., Sheinin, A., Goldental, A., and Kanter, I. (2017) New Types of Experiments Reveal that a Neuron Functions as Multiple Independent Threshold Units. Sci. Rep. 7 (1), 18036, DOI: 10.1038/s41598-017-18363-1.
