“The brain works the way it does because it’s made of meat, and meat is not deterministic.” — Anthony Movshon

At first glance, the brain is a mess. More like a tangled ball of yarn than a finely woven tapestry, every neuron-to-neuron connection is in there, somewhere. Yet look a little closer and this complex structure resolves into very clear regularity. I could take you on a tour of the waves of Purkinje cells, straight-backed like military men, reaching their arms out to passing fibers shooting up from a distant province. I could show you the shapes of the hippocampus where memories are created, messages washing down step by step. I could show you the round columns of barrel cortex, clear to your eye, that precisely mirror the pattern of whiskers that eventually stimulate them. There is so much visible structure here that we’re still attempting to unlock.

Hippocampus by Jason Snyder

Until recently, studying the brain was like studying individual water molecules by throwing a rock into a pond. We didn’t really understand that the brain caused behavior until the 1600s when Thomas Willis coined the term neurology. We didn’t know neurons were individual units that talked to each other until a century ago. We didn’t know how neurons spiked until half a century ago, nor what they were doing.

We’re only starting to glimpse what the brain is capable of, how it sends messages without sending noise, how it takes the overwhelming set of sensations and pares it down into manageable feelings and desires. We’re still grasping at straws here, so many, many straws, and then throwing those straws at physicists and theoreticians and telling them to explain that. And they’ll come up with this explanation or that explanation, but really we shrug our shoulders and don’t know what to do with what they say.

Now people are telling us that we’ll never have a theory of the brain like physicists have. That biology isn’t elegant like physics is. You’re going to look at this picture, this hippocampus, precisely structured to create memories, the same in all mammals, and tell me, “it’s too complicated! no one could understand this!”? It’s not elegant and beautiful and clearly structured?

It’s not as if there are no clear explanations already. In fact, we know exactly how neurons spike to signal to their partners that something important happened. In 1952, Alan Hodgkin and Andrew Huxley developed a series of equations that not only describe how the membrane voltage of a neuron changes to generate action potentials, but also predicted the existence and kinetics of the ion channels that cause those action potentials. By combining these equations with an equation that describes how current spreads through cables, neuroscientists can simulate an entire neuron. This is exactly how the Blue Brain Project is attempting to simulate large networks of neurons in concordance with physical reality.
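Those equations are concrete enough to run. Below is a minimal sketch of a single-compartment Hodgkin–Huxley neuron integrated with forward Euler, using the standard squid-axon parameters from the 1952 papers; the injected current of 10 µA/cm² is an arbitrary choice that happens to sit above the spiking threshold:

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (squid giant axon, Hodgkin & Huxley 1952).
C_m = 1.0                               # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3       # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387   # reversal potentials, mV

# Voltage-dependent rate constants for the gating variables m, h, n.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of one HH compartment.
    I_ext is the injected current (uA/cm^2); returns the voltage trace in mV."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting state
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current
        I_K  = g_K * n**4 * (V - E_K)         # potassium current
        I_L  = g_L * (V - E_L)                # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace.append(V)
    return np.array(trace)

V = simulate(I_ext=10.0)
spikes = np.sum((V[1:] >= 0.0) & (V[:-1] < 0.0))  # upward crossings of 0 mV
print(f"spikes in 50 ms: {spikes}")
```

Drop I_ext to a few µA/cm² and the firing should stop: the equations capture not just the shape of the spike but its all-or-none threshold character.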

So we know exactly how neurons behave! At the end of the day, though, this knowledge can only tell us so much. It cannot tell us how networks of neurons move in their intricate dance to generate the right pulses and combinations that cause you to cough, or to turn your head, or to smile and say “hello”. It is like knowing the behavior of a particle but asking why the planets turn the way they do, or why, when I push coffee off my table, it splatters on the floor.

We have some other equations that represent what the brain does (see above). Some are physical realities; others are algorithms the brain appears to be approximating. I have gone into what I think of them elsewhere, but two of them are especially important.

First is the general principle that sensory neurons — that first layer of neurons that learns about the world — maximize their information about the world. If you look at Shannon’s information theory, the equations that he deduced to describe signals transmitted down a noisy cable, it turns out that they also have an impressive explanatory power for sensory neurons. This is what these neurons do! They want to grab as much information about the world as possible and bring it into the nervous system to be used by the rest of the brain. But how they can maximize their information is contingent on the environment that the animal is in. The statistics of the natural world vary between cave dwellers, and bats, and people hiking through the bright desert sun, and the nervous system has to take that into account.
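One concrete version of this principle is histogram equalization: a neuron with a bounded output range carries the most information when its input-output curve matches the cumulative distribution of the stimuli it actually encounters, as Laughlin showed for fly visual interneurons. Here is a small sketch of that argument; the lognormal stimulus distribution is an invented stand-in for natural-scene statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stimulus: light intensities drawn from a skewed (lognormal)
# distribution, standing in for the statistics of a natural environment.
stimulus = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

def entropy(samples, bins=16):
    """Shannon entropy (bits) of samples discretized into equal-width bins."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def equalizing_response(stimulus):
    """A response curve matched to the stimulus distribution: each stimulus is
    mapped to its rank (the empirical CDF), so responses fill [0, 1] evenly."""
    ranks = np.argsort(np.argsort(stimulus))
    return ranks / (len(stimulus) - 1)

H_raw = entropy(stimulus)                    # linear encoding of raw intensity
H_eq = entropy(equalizing_response(stimulus))  # CDF-matched encoding

print(f"entropy, linear code:    {H_raw:.2f} bits")
print(f"entropy, equalized code: {H_eq:.2f} bits (16 bins can carry at most 4)")
```

The equalized code pushes the output entropy near the 4-bit ceiling, while the linear code wastes most of its range on intensities that rarely occur. And because the CDF depends on the environment, the optimal curve differs for the cave dweller, the bat, and the desert hiker, just as the text says.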

Second is the idea that we learn the value of an event or an object by continually predicting how much we will like it and then responding not to the actual value, but to how different that value ends up being from our expectations. This is a theory that was proposed from observations of behavior and ended up having a very clear neural analog! There are neurons that release dopamine, but only when something unexpected happens. And this dopamine is a signal that tells other parts of the nervous system: the world is different than we thought; we need to change how we are behaving. And again, I can write down these equations for you, equations similar to those that programmers use to build modern machine-learning algorithms, and they will tell you how one part of the nervous system learns.
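Those learning equations fit in a few lines. Here is a minimal sketch in the Rescorla–Wagner / temporal-difference style, where the prediction-error term delta stands in for the dopamine signal; the learning rate and reward values are invented for illustration:

```python
# A sketch of reward-prediction-error learning (Rescorla-Wagner /
# temporal-difference style). All numbers are invented for illustration.

def update(value, reward, alpha=0.1):
    """One learning step. The error term delta plays the role of the dopamine
    signal: large when the world surprises us, zero when it matches our
    expectations. alpha is the learning rate."""
    delta = reward - value              # prediction error ("dopamine")
    return value + alpha * delta, delta

# A cue is repeatedly followed by a reward of 1.0.
value = 0.0                             # the animal starts with no expectation
errors = []
for trial in range(100):
    value, delta = update(value, reward=1.0)
    errors.append(delta)

print(f"first-trial error: {errors[0]:.2f}")   # large: the reward was a surprise
print(f"last-trial error:  {errors[-1]:.4f}")  # near zero: fully predicted
print(f"learned value:     {value:.4f}")       # converges to the true reward
```

Once the prediction converges, a fully expected reward produces no error, which is exactly the observation about dopamine neurons above: they respond to surprises, not to rewards per se.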