Unsupervised Multi-layer Spiking Convolutional Neural Network Using Layer-Wise Sparse Coding

2020 
Deep learning architectures have shown remarkable performance in machine learning and AI applications. However, training a spiking deep convolutional neural network (DCNN) that retains the properties of traditional CNNs remains an open problem. This paper presents a novel spiking DCNN consisting of a convolutional/pooling layer followed by a fully connected SNN, trained in a greedy layer-wise manner. The spiking DCNN component performs feature extraction: we use SAILnet to learn convolution kernels from the original MNIST data. Raw MNIST images are preprocessed with a bilateral filter before being fed to the convolution layer. The kernels trained in the previous step are convolved with the filtered image to produce feature maps, on which a maximum pooling operation is applied. We then train the fully connected SNN for prediction using BP-STDP. To avoid overfitting and further improve convergence speed, dynamic dropout is enabled once training-set accuracy reaches 97%, preventing co-adaptation of neurons. In addition, the learning rate is adjusted automatically during training, which speeds up training while slowing the rise of training accuracy at each epoch. Our model is evaluated on the MNIST digit and Cactus3 shape datasets, achieving recognition accuracies of 96.16% and 97.92% on the respective test sets. This level of performance shows that the model can extract independent and prominent features from images using spikes.
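The preprocessing and feature-extraction stages described above (bilateral filtering, convolution with a learned kernel, maximum pooling) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the kernel here stands in for a SAILnet-trained one, and the filter window size and sigma values are illustrative assumptions.

```python
import numpy as np

def bilateral_filter(img, size=3, sigma_s=1.0, sigma_r=0.5):
    """Naive bilateral filter: each output pixel is a weighted average of its
    neighborhood, with weights combining a spatial Gaussian and a range
    (intensity-difference) Gaussian, smoothing while preserving edges."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + size, j:j + size]
            rng = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            w = spatial * rng
            out[i, j] = (w * patch).sum() / w.sum()
    return out

def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution of the filtered image with one trained kernel,
    producing a single feature map."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(fm, size=2):
    """Non-overlapping maximum pooling over size x size windows."""
    h, w = fm.shape[0] // size, fm.shape[1] // size
    return fm[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

For a 28x28 MNIST image and a hypothetical 5x5 kernel, `conv2d_valid` yields a 24x24 feature map and `max_pool` reduces it to 12x12; the pooled maps would then be flattened and fed to the fully connected SNN.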