STiDi-BP: Spike Time Displacement based Error BackPropagation in multilayer spiking neural networks

2021 
Abstract Error backpropagation is the most common approach for directly training spiking neural networks. However, the non-differentiability of spiking neurons makes backpropagating the error a challenge. In this paper, we introduce a new temporal learning algorithm, STiDi-BP, which dispenses with backward recursive gradient computation and, to sidestep the non-differentiability of SNNs, uses a linear approximation to compute the derivative of latency with respect to the membrane potential. We apply gradient descent to each layer independently, based on an estimate of the temporal error in that layer: we calculate the desired firing time of each neuron and compare it to its actual firing time. STiDi-BP employs time-to-first-spike temporal coding (one spike per neuron) and uses spiking neuron models with a piecewise linear postsynaptic potential, which provides large computational benefits. To evaluate the proposed learning rule, we run three experiments: the XOR problem, the face/motorbike categories of the Caltech 101 dataset, and the MNIST dataset. Experimental results show that STiDi-BP outperforms traditional BP in terms of accuracy and/or computational cost.
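The layer-local update described above can be sketched for a single neuron. The following is a minimal illustration, not the paper's implementation: the PSP shape, time constants, learning rate, and the exact form of the linear latency approximation are assumptions chosen for clarity.

```python
import numpy as np

def piecewise_linear_psp(t, t_spike, tau=10.0):
    """Piecewise linear postsynaptic potential: ramps up linearly for tau
    ms after the presynaptic spike at t_spike, then saturates (a
    simplified kernel, assumed for this sketch)."""
    return np.clip((t - t_spike) / tau, 0.0, 1.0)

def firing_time(weights, in_times, threshold=1.0, t_max=100.0, dt=0.1):
    """Earliest time the membrane potential crosses threshold
    (time-to-first-spike coding: at most one spike per neuron)."""
    for t in np.arange(0.0, t_max, dt):
        v = np.sum(weights * piecewise_linear_psp(t, in_times))
        if v >= threshold:
            return t
    return t_max  # no spike: saturate at t_max

def stidi_bp_update(weights, in_times, t_desired, lr=0.05):
    """One STiDi-BP-style update for one neuron (hypothetical sketch):
    the temporal error (actual - desired firing time) is scaled by each
    input's PSP value at the firing time, which plays the role of the
    linear approximation of d(latency)/d(potential)."""
    t_actual = firing_time(weights, in_times)
    err = t_actual - t_desired        # fired too late -> strengthen weights
    grad = err * piecewise_linear_psp(t_actual, in_times)
    return weights + lr * grad, t_actual
```

Repeatedly applying `stidi_bp_update` pulls the neuron's actual firing time toward the desired one, without any backward recursive gradient pass through downstream layers.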