Lightweight Self-Attention Residual Network for Hyperspectral Classification

2022 
Compared with traditional hyperspectral image classification methods, classification models based on deep convolutional neural networks (DCNNs) achieve higher-precision classification. However, this gain in accuracy has come with explosive growth in model complexity. In this letter, we propose a more lightweight and efficient residual structure, intended to replace the standard residual structure, to alleviate this problem. The structure uses the "divide and conquer" idea to reduce the number of model parameters and computations. In addition, it introduces a self-attention mechanism so that the input and output feature maps can be fused adaptively, further enhancing the feature extraction ability of the residual structure. Experimental results show that the proposed residual structure significantly reduces model complexity while maintaining high classification accuracy, even surpassing current mainstream classification models.
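The abstract names two ingredients: a "divide and conquer" split of the channels to cut parameters, and a self-attention gate that adaptively fuses the block's input with its output. The letter does not give the exact formulation here, so the following NumPy sketch is only a hypothetical illustration of those two ideas (the group split, the per-channel sigmoid gate, and the `lightweight_residual_block` name are all assumptions, not the authors' code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lightweight_residual_block(x, n_groups=4, rng=None):
    """Hypothetical sketch of a lightweight residual block.

    'Divide and conquer': the c input channels are split into n_groups
    groups, each transformed by its own small (g x g) weight instead of
    one dense (c x c) weight, cutting parameters from c*c to c*c/n_groups.
    A self-attention-style gate then fuses input and output adaptively.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c = x.shape[0]                      # x: (channels, features)
    assert c % n_groups == 0
    g = c // n_groups
    # Divide: transform each channel group independently with a small weight.
    out = np.concatenate([
        rng.standard_normal((g, g)) @ x[i * g:(i + 1) * g]
        for i in range(n_groups)
    ])
    # Conquer/fuse: a per-channel gate (assumption: computed from the
    # global average of the transformed features) weighs the transformed
    # features against the identity input, instead of a plain addition.
    gate = sigmoid(out.mean(axis=-1, keepdims=True))
    return gate * out + (1.0 - gate) * x
```

With c = 8 channels and 4 groups, the grouped weights hold 4 × (2 × 2) = 16 parameters versus 64 for a dense 8 × 8 weight, which is the kind of reduction the "divide and conquer" claim refers to; the gated sum generalizes the standard residual shortcut `out + x` to an adaptive mixture.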