Hyperspectral Image Super-Resolution Based on Multi-Scale Mixed Attention Network Fusion

2021 
Hyperspectral images contain rich spectral information and have great application value. However, due to various hardware limitations, the spatial resolution of hyperspectral images acquired by sensors is low. Hyperspectral image super-resolution has therefore attracted much attention as a way to improve spatial quality. In this letter, a single hyperspectral image super-resolution method based on network fusion is proposed. Our method consists of a super-resolution network part and a fusion part. In the super-resolution network part, we construct 3D multi-scale mixed attention networks (3D-MSMANs) by cascading 3D multi-scale mixed attention blocks (3D-MSMABs) to restore high-resolution hyperspectral images. Each 3D-MSMAB consists of a 3D Res2Net module and a mixed attention module. The 3D Res2Net module is a simple and effective multi-scale feature extractor. The mixed attention module is built by combining first-order and second-order statistics of the features. In addition, we apply a mutual learning loss between the 3D-MSMANs so that they can learn from each other. In the fusion part, a fusion module is designed to merge the outputs of the 3D-MSMANs. Our method achieves good results in both simulated and real super-resolution experiments. Code is available at https://github.com/LYT-max/Mixed-Attention-for-HSI-SR.
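The abstract describes each 3D-MSMAB as a 3D Res2Net module followed by a mixed attention module that combines first-order and second-order feature statistics. As an illustration only, the following is a minimal PyTorch sketch of such a block; the module names, channel sizes, choice of mean/variance as the two statistics, and the Res2Net-style split are assumptions made for this sketch, not the authors' implementation (see the repository linked above for the official code).

```python
# Illustrative sketch of a 3D multi-scale mixed attention block.
# All design choices here are assumptions; the paper's official code is at
# https://github.com/LYT-max/Mixed-Attention-for-HSI-SR.
import torch
import torch.nn as nn


class MixedAttention3D(nn.Module):
    """Channel attention mixing a first-order statistic (global mean) and a
    second-order statistic (global variance) of the feature maps."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # First-order statistic: mean over (band, height, width).
        mean = x.mean(dim=(2, 3, 4), keepdim=True)
        # Second-order statistic: variance over the same dimensions.
        var = x.var(dim=(2, 3, 4), keepdim=True, unbiased=False)
        # Mix both statistics into one channel-attention map.
        attn = self.sigmoid(self.mlp(mean) + self.mlp(var))
        return x * attn


class MSMAB3D(nn.Module):
    """3D multi-scale block: Res2Net-style channel splits with hierarchical
    3x3x3 convolutions, followed by mixed attention and a residual skip."""

    def __init__(self, channels: int, scales: int = 4):
        super().__init__()
        assert channels % scales == 0
        self.scales = scales
        width = channels // scales
        # One 3x3x3 convolution per split except the first (identity) split.
        self.convs = nn.ModuleList(
            [nn.Conv3d(width, width, kernel_size=3, padding=1)
             for _ in range(scales - 1)]
        )
        self.attention = MixedAttention3D(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        splits = torch.chunk(x, self.scales, dim=1)
        out = [splits[0]]
        prev = None
        for i, conv in enumerate(self.convs):
            # Each scale receives its split plus the previous scale's output.
            inp = splits[i + 1] if prev is None else splits[i + 1] + prev
            prev = conv(inp)
            out.append(prev)
        y = torch.cat(out, dim=1)
        return x + self.attention(y)


if __name__ == "__main__":
    # Toy input: batch 1, 16 feature channels, 8 bands, 32x32 spatial size.
    block = MSMAB3D(channels=16)
    feats = torch.randn(1, 16, 8, 32, 32)
    print(block(feats).shape)  # torch.Size([1, 16, 8, 32, 32])
```

In this sketch the blocks would be cascaded to form one 3D-MSMAN; the abstract's mutual learning loss (training several such networks to mimic each other's outputs) and the fusion module that merges their outputs are not shown.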