MSTNet: A Multilevel Spectral–Spatial Transformer Network for Hyperspectral Image Classification

2022 
Convolutional neural networks (CNNs) have been widely used in hyperspectral image classification (HSIC). Although current CNN-based methods achieve good performance, they still face a series of challenges: the receptive field is limited, information is lost in down-sampling layers, and deep networks consume substantial computing resources. To overcome these problems, we propose a multilevel spectral–spatial transformer network (MSTNet) for HSIC. MSTNet adopts an image-based classification framework, which is efficient and straightforward. Based on this framework, we design a self-attention-based encoder. First, HSIs are processed into sequences, and a learned positional embedding (PE) is added to integrate spatial information. Then, a pure transformer encoder (TE) is employed to learn feature representations. Finally, the multilevel features are processed by decoders to generate classification results at the original image size. Experimental results on three real hyperspectral datasets demonstrate the efficiency of the proposed method in comparison with other related CNN-based methods.
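The following is a minimal sketch, not the authors' implementation, of the pipeline the abstract describes: a hyperspectral cube is flattened into a token sequence, a learned positional embedding is added, a pure transformer encoder extracts features, and a simple decoder maps the tokens back to per-pixel class scores at the original image size. All names, layer sizes, and the single-level decoder (the paper's multilevel feature fusion is omitted for brevity) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SpectralSpatialTransformerSketch(nn.Module):
    """Hypothetical, simplified stand-in for the MSTNet pipeline."""

    def __init__(self, in_bands, num_classes, embed_dim=64, depth=4, heads=4, h=9, w=9):
        super().__init__()
        # 1) "HSIs are processed into sequences": each pixel's spectrum becomes a token.
        self.to_token = nn.Linear(in_bands, embed_dim)
        # 2) Learned positional embedding integrates spatial information.
        self.pos_embed = nn.Parameter(torch.zeros(1, h * w, embed_dim))
        # 3) Pure transformer encoder learns feature representations.
        layer = nn.TransformerEncoderLayer(embed_dim, heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # 4) Decoder projects tokens to class scores at the original image size.
        self.decoder = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                       # x: (B, bands, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, bands)
        tokens = self.to_token(tokens) + self.pos_embed
        feats = self.encoder(tokens)            # (B, H*W, embed_dim)
        logits = self.decoder(feats)            # (B, H*W, num_classes)
        return logits.transpose(1, 2).reshape(b, -1, h, w)  # (B, classes, H, W)


# Example: 9x9 patches from a 103-band cube (Pavia University dimensions).
model = SpectralSpatialTransformerSketch(in_bands=103, num_classes=9)
out = model(torch.randn(2, 103, 9, 9))
print(out.shape)  # torch.Size([2, 9, 9, 9])
```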