So they exist: but why does this local spike change the way we think about the brain as a computer? Because the dendrites of a pyramidal neuron contain many separate branches, and each can sum-up-and-spit-out-a-spike on its own. Which means that each branch of a dendrite acts like a little nonlinear output device, summing its inputs and outputting a local spike if that branch gets enough of them at roughly the same time:

Wait. Wasn’t that our model of a neuron? Yes it was. Now if we replace each little branch of dendrite with one of our little “neuron” devices, then a pyramidal neuron looks something like this:

Yes, each pyramidal neuron is a two-layer neural network. All by itself.

Beautiful work by Poirazi and Mel back in 2003 showed this explicitly. They built a detailed computer model of a single neuron, simulating each little bit of dendrite, the local spikes within them, and how those spikes sweep down to the body. They then directly compared the output of the neuron model to the output of a two-layer neural network: the two gave the same responses.

The extraordinary implication of these local spikes is that each neuron is a computer. By itself the neuron can compute a huge range of so-called nonlinear functions. Functions that a neuron which just sums-up-and-spits-out-a-spike cannot ever compute. For example, with four inputs (Blue, Sea, Yellow, and Sun) and two branches acting as little non-linear devices, we can set up a pyramidal neuron to compute the “feature-binding” function: we can ask it to respond to Blue and Sea together, or respond to Yellow and Sun together, but not to respond otherwise — not even to Blue and Sun together or Yellow and Sea together. Of course, neurons receive many more than four inputs, and have many more than two branches: so the range of logical functions they could compute is astronomical.
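The feature-binding function can be sketched as a tiny program. This is a toy illustration, not Poirazi and Mel's actual model: each branch is a simple threshold unit that spikes only if both of its inputs arrive together, and the soma spikes if either branch does.

```python
def branch(inputs, threshold=2):
    """A dendritic branch: spikes (1) only if enough inputs arrive at once."""
    return 1 if sum(inputs) >= threshold else 0

def pyramidal_neuron(blue, sea, yellow, sun):
    """Two branches feed the soma: branch 1 binds Blue+Sea, branch 2 binds Yellow+Sun."""
    b1 = branch([blue, sea])       # responds only to Blue AND Sea together
    b2 = branch([yellow, sun])     # responds only to Yellow AND Sun together
    return 1 if b1 + b2 >= 1 else 0   # soma spikes if either branch spikes

print(pyramidal_neuron(1, 1, 0, 0))  # Blue + Sea   -> 1
print(pyramidal_neuron(0, 0, 1, 1))  # Yellow + Sun -> 1
print(pyramidal_neuron(1, 0, 0, 1))  # Blue + Sun   -> 0
print(pyramidal_neuron(0, 1, 1, 0))  # Yellow + Sea -> 0
```

A single sum-and-threshold neuron cannot compute this function: no one threshold on the plain sum of the four inputs separates Blue+Sea and Yellow+Sun from Blue+Sun and Yellow+Sea, because all four pairs sum to the same total. The branches are what make it possible.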

More recently, Romain Cazé and friends (I am one of those friends) have shown that a single neuron can compute an amazing range of functions even if it cannot make a local, dendritic spike. Because dendrites are naturally not linear: in their normal state they sum inputs to a total less than the sum of the individual values. They are sub-linear. For them, 2 + 2 = 3.5. And having many dendritic branches with sub-linear summation also lets the neuron act as a two-layer neural network. A two-layer neural network that can compute a different set of non-linear functions from those computed by neurons with supra-linear dendrites. And pretty much every neuron in the brain has dendrites. So almost all neurons could, in principle, be a two-layer neural network.
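Sub-linear summation can be sketched the same way. This is a toy illustration, assuming a simple power-law saturation rather than any fitted dendritic model: each branch outputs a little less than the sum of its inputs, so the neuron as a whole responds more strongly to inputs scattered across branches than to the same inputs clustered on one branch; the opposite preference to a neuron with supra-linear dendrites.

```python
def sublinear_branch(drive):
    """Sub-linear summation via a power law < 1, so 2 + 2 totals about 3.5."""
    return drive ** 0.9

def neuron(branch_drives):
    """The soma linearly sums the (sub-linear) outputs of its branches."""
    return sum(sublinear_branch(d) for d in branch_drives)

# Four units of input clustered on one branch...
print(neuron([4, 0]))   # -> about 3.5
# ...versus the same four units scattered across two branches:
print(neuron([2, 2]))   # -> about 3.7
```

Scattering wins because each branch is still operating on the steep, nearly-linear part of its curve; clustering pushes one branch into saturation and wastes input. That preference for scattered input is itself a non-linear computation no plain summing neuron can do.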