Residual Spatial Attention Kernel Generation Network for Hyperspectral Image Classification With Small Sample Size

2022 
With the rapid development of deep learning, convolutional neural networks (CNNs) have been widely used in hyperspectral image classification (HSIC) and have achieved excellent performance. However, CNNs reuse the same kernel weights at different locations, which limits their ability to capture diverse spatial interactions. Moreover, CNNs usually require a large number of training samples to optimize their learnable parameters; when training samples are limited, the classification performance of CNNs tends to degrade sharply. To tackle these issues, a novel residual spatial attention kernel generation network (RSAKGN) is proposed for HSIC. First, a spatial attention kernel generation module (SAKGM) is built to extract discriminative semantic features; it dynamically computes attention weights to generate location-specific spatial attention kernels. Then, the SAKGM is combined with a residual learning framework by embedding it into a bottleneck residual block, yielding the residual spatial attention block (RSAB). The RSAKGN is constructed by stacking several RSABs. Experimental results on three public HSI datasets demonstrate that the proposed RSAKGN outperforms several state-of-the-art methods under small sample sizes.
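To make the two ideas in the abstract concrete, the sketch below shows one plausible PyTorch implementation of per-location spatial attention kernel generation and its embedding in a bottleneck residual block. This is not the authors' code: the layer choices (1x1 bottleneck, softmax normalization over the k*k window, reduction ratio, class and argument names) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionKernelGeneration(nn.Module):
    """Generates a k*k attention kernel at every spatial location and applies
    it to the local neighborhood (shared across channels), instead of reusing
    one static convolution kernel everywhere."""
    def __init__(self, channels, kernel_size=3, reduction=4):
        super().__init__()
        self.k = kernel_size
        hidden = max(channels // reduction, 8)
        # Small bottleneck that predicts k*k attention weights per pixel.
        self.kernel_gen = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, kernel_size * kernel_size, kernel_size=1),
        )
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        # (B, k*k, H, W): one generated kernel per spatial location.
        kernels = F.softmax(self.kernel_gen(x), dim=1)
        # Gather k*k neighborhoods: (B, C*k*k, H*W) -> (B, C, k*k, H, W).
        patches = self.unfold(x).view(b, c, self.k * self.k, h, w)
        # Weight each neighbor by its attention value and sum over the window.
        return (patches * kernels.unsqueeze(1)).sum(dim=2)

class ResidualSpatialAttentionBlock(nn.Module):
    """Bottleneck residual block with the kernel-generation module embedded
    between the 1x1 reduce and 1x1 expand convolutions."""
    def __init__(self, channels, bottleneck=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck, kernel_size=1),
            nn.BatchNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            SpatialAttentionKernelGeneration(bottleneck),
            nn.BatchNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # Identity skip connection as in standard residual learning.
        return F.relu(x + self.body(x))

# Example: a patch of a hyperspectral cube with 100 spectral bands.
x = torch.randn(2, 100, 11, 11)
block = ResidualSpatialAttentionBlock(channels=100)
print(block(x).shape)  # torch.Size([2, 100, 11, 11])
```

In this reading, stacking several such blocks and adding a classifier head would give an RSAKGN-like network; the actual kernel-generation layers and normalization in the paper may differ.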