I've seen some work that attempts to recreate "spiking" neural networks (i.e. neurons that fire only when their inputs cross a threshold), intended to mimic the biochemistry of real neurons.
That work seems to frame its contribution as reducing the power required to evaluate the network, though. If I recall correctly, the accuracy of those models on everyday tasks is typically far lower than that of standard ANNs, and they're a pain to train. So they're still not very common.
That is exactly what I built circa 2008. I used the Izhikevich model for spiking. It was certainly faster on the GPU (2000x), but, yeah, getting the network to converge on anything was terrible. Debugging it was fun/awful, though:
1: "Hey, do you see the first squiggle with the two fuzzes after it?"
2: "Next to Beaker's eyebrows?"
The low-power work seems to have been aiming at a rough filter rather than a full system. Still fun to use.
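For anyone curious what the Izhikevich model actually looks like: it's just two coupled ODEs with a reset rule, which is why it maps so nicely onto GPUs. Here's a minimal Euler-integrated sketch (parameter names and the "regular spiking" values a=0.02, b=0.2, c=-65, d=8 follow Izhikevich's 2003 paper; the constant input current and time step are my own illustrative choices):

```python
def simulate(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, steps=1000, dt=1.0):
    """Simulate one Izhikevich neuron; returns the time steps at which it spiked."""
    v = c          # membrane potential (mV)
    u = b * v      # membrane recovery variable
    spikes = []
    for t in range(steps):
        # Izhikevich (2003): v' = 0.04v^2 + 5v + 140 - u + I,  u' = a(bv - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: record it, then reset
            spikes.append(t)
            v = c
            u += d
    return spikes

spikes = simulate()
print(f"{len(spikes)} spikes in 1000 steps")
```

With a constant driving current the neuron fires periodically; the fun/awful debugging starts when you wire thousands of these together and the "squiggles" are raster plots of their spike trains.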