In trying to build a better neural network, I have begun studying neuroscience. I am focusing on how a real neuron actually works, to see if that can give me any new insights.

My first discovery is that studying neuroscience causes incredible headaches. You’re using neurons in your head to think about how you’re thinking using the neurons in your head. Just thinking about that kind of thinking makes my brain hurt!

That aside, other early observations include:

Not all neurons are equal: some have one input and one output, some have many inputs and one output, some have no inputs and only outputs, and some have no outputs at all; different structures, as you may imagine, have different functions

You’re not always doing something; sometimes doing nothing is the response, because neurons can decide not to output anything

Inputs are analog while outputs are digital (all-or-nothing); this parallels the way an artificial neuron applies a sigmoid, and its derivative during training, to the analog weighted sum on the input side
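
To make that mapping concrete, here is a minimal sketch (the names, weights, and bias are my own, purely illustrative): analog inputs are weighted and summed, and a sigmoid squashes the result toward a roughly binary output.

```python
import math

def sigmoid(x):
    """Squash any real-valued (analog) sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    """Analog in, near-binary out: weight and sum the inputs, then squash."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# A strongly positive weighted sum lands near 1, a strongly negative one near 0.
print(neuron_output([0.2, 0.9, 0.4], [2.0, 3.5, -1.0], bias=-1.5))  # ~0.84
```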

Repetitive outputs create memories; maybe this has Long Short-Term Memory (LSTM) applications
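
One existing formalization of “repetition creates a memory” is Hebbian learning (“neurons that fire together wire together”). That is not what this post is proposing, just a minimal sketch of the idea, with an illustrative function name and learning rate:

```python
def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Strengthen a connection whenever the neurons on both ends
    of it are active at the same time."""
    return weight + learning_rate * pre * post

w = 0.1
for _ in range(10):              # the same pairing of activity repeats
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)                         # repetition has strengthened the connection to ~1.1
```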

Outputs aren’t automatic; a neuron accumulates incoming signals until a threshold is crossed, building up certainty, in a sense, before releasing an output. Neural networks similarly iterate to reduce the margin of error between weighted outputs and training data
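
That “build up before firing” behavior has a standard simplified model, the leaky integrate-and-fire neuron. A rough sketch, with made-up threshold and leak values:

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Accumulate input over time; emit a spike (1) only when the
    accumulated potential crosses the threshold, then reset."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0      # fire, then start accumulating again
        else:
            spikes.append(0)     # below threshold: stay silent
    return spikes

# Small, repeated inputs only produce a spike once enough has built up.
print(integrate_and_fire([0.3, 0.3, 0.3, 0.3, 0.3]))  # [0, 0, 0, 1, 0]
```

It also covers the earlier observation that doing nothing is a valid response: sub-threshold input simply produces no spike.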

There are three distinct functional roles for neurons: sensory (gathering inputs), integration (the interneurons in between), and motor (doing something with the outputs); no single neuron does everything
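
Loosely, that division of labor is already how layered networks are organized: an input layer senses, hidden layers integrate, and an output layer acts. A toy forward pass, with arbitrary example weights:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """One stage: each neuron in the stage takes a weighted sum of all inputs."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row))) for row in weights]

sensed     = [0.8, 0.1]                                  # "sensory": raw inputs
integrated = layer(sensed, [[0.5, -0.2], [0.3, 0.9]])    # "integration": hidden layer
action     = layer(integrated, [[1.2, -0.7]])            # "motor": output layer
print(action)
```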

Yes, I have oversimplified neuroscience. I am only interested in the field inasmuch as it pertains to coding better neural networks. Furthermore, there are some areas that I am still researching.

For example, while I don’t think I need a code equivalent for a myelin sheath (insulation), astrocytes might be important. The one-sentence synopsis is that neurons form synapses when astrocytes signal them to. That, among other things, seems worth exploring.
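
Purely as a thought experiment (none of this code comes from the post, and the class and method names are hypothetical), the astrocyte idea might translate into a rule where new connections can only be created when an external signal permits it:

```python
class Network:
    """Toy wiring diagram in which a synapse can only be created
    when an external "astrocyte" signal allows it."""
    def __init__(self):
        self.synapses = set()

    def maybe_connect(self, pre, post, astrocyte_signal):
        # The hypothetical gate: no astrocyte signal, no new synapse.
        if astrocyte_signal:
            self.synapses.add((pre, post))

net = Network()
net.maybe_connect("a", "b", astrocyte_signal=False)  # nothing happens
net.maybe_connect("a", "b", astrocyte_signal=True)   # synapse formed
print(net.synapses)  # {('a', 'b')}
```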