A solution to the learning dilemma for recurrent networks of spiking neurons

2019 
Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet, in spite of extensive research, it has remained open how learning through synaptic plasticity could be organized in such networks. We argue that two pieces of this puzzle were provided by experimental data from neuroscience, and a new mathematical insight tells us how they need to be combined to enable network learning through gradient descent. The resulting learning method -- called e-prop -- approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. But in contrast to BPTT, e-prop is biologically plausible. In addition, it elucidates how new brain-inspired computer chips, which are drastically more energy efficient, can be enabled to learn.
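
As a brief illustration of how the two pieces are combined (a sketch based on the full e-prop paper; the abstract itself does not state the formula): the gradient of the loss E with respect to a recurrent weight W_ji is factorized into a learning signal L_j^t for the postsynaptic neuron j and an eligibility trace e_ji^t that is computed forward in time at the synapse from neuron i to neuron j,

\[
\frac{dE}{dW_{ji}} \;=\; \sum_{t} L_j^{t}\, e_{ji}^{t},
\qquad
e_{ji}^{t} \;=\; \frac{\partial z_j^{t}}{\partial h_j^{t}}\,\epsilon_{ji}^{t},
\qquad
\epsilon_{ji}^{t} \;=\; \frac{\partial h_j^{t}}{\partial h_j^{t-1}}\,\epsilon_{ji}^{t-1}
\;+\; \frac{\partial h_j^{t}}{\partial W_{ji}},
\]

where h_j^t denotes the hidden state of neuron j (e.g., its membrane potential), z_j^t its spike output, and L_j^t an online approximation of dE/dz_j^t that can be broadcast to the neuron. Unlike BPTT, all factors can be computed forward in time, so no propagation of errors backward in time is required.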