Single-Image Super-Resolution for Remote Sensing Images Using a Deep Generative Adversarial Network With Local and Global Attention Mechanisms

2021 
Super-resolution (SR) technology is an important way to improve spatial resolution under the constraints of sensor hardware. With the development of deep learning (DL), DL-based SR models, especially convolutional neural networks (CNNs), have achieved state-of-the-art performance. However, remote sensing images usually contain a variety of ground scenes and objects with different scales, orientations, and spectral characteristics, whereas previous works typically treat important and unnecessary features equally, or apply different weights only within the local receptive field and thus ignore long-range dependencies; exploiting features at different levels and reconstructing images with realistic details therefore remains a challenging task. To address these problems, this article proposes an attention-based generative adversarial network (SRAGAN) that applies both local and global attention mechanisms. Specifically, local attention in the SR model focuses on structural components of the Earth's surface that require more attention, while global attention captures long-range interdependencies in the channel and spatial dimensions to further refine details. To optimize the adversarial learning process, we also use local and global attention in the discriminator to enhance its discriminative ability, apply a gradient penalty in the form of a hinge loss, and adopt a loss function that combines L1 pixel loss, L1 perceptual loss, and relativistic adversarial loss to promote rich details. Experiments show that SRAGAN achieves performance improvements and reconstructs better details than current state-of-the-art SR methods. A series of ablation studies and model analyses validate the efficiency and effectiveness of our method.
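The abstract states that the generator objective combines an L1 pixel loss, an L1 perceptual loss, and a relativistic adversarial loss. As a rough illustration only, the PyTorch sketch below shows one common way such a combined loss is assembled; the loss weights, the VGG-19 feature layer, and the relativistic-average (RaGAN) formulation are assumptions for the sketch, not values or choices taken from the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class CombinedGeneratorLoss(nn.Module):
    """Sketch of an L1 pixel + L1 perceptual + relativistic adversarial loss.

    Weights and the VGG feature depth are illustrative assumptions,
    not the settings used by SRAGAN.
    """

    def __init__(self, pixel_w=1.0, percep_w=1.0, adv_w=1e-3):
        super().__init__()
        # Frozen VGG-19 feature extractor for the perceptual term (assumed layer depth).
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:35]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()
        self.l1 = nn.L1Loss()
        self.pixel_w, self.percep_w, self.adv_w = pixel_w, percep_w, adv_w

    def forward(self, sr, hr, d_sr, d_hr):
        # L1 pixel loss between the super-resolved image and the ground truth.
        pixel = self.l1(sr, hr)
        # L1 perceptual loss on frozen VGG-19 feature maps.
        percep = self.l1(self.vgg(sr), self.vgg(hr))
        # Relativistic average adversarial loss (RaGAN-style, assumed form):
        # the generator pushes fake logits above the mean real logits and vice versa.
        bce = nn.functional.binary_cross_entropy_with_logits
        adv = 0.5 * (
            bce(d_sr - d_hr.mean(), torch.ones_like(d_sr))
            + bce(d_hr - d_sr.mean(), torch.zeros_like(d_hr))
        )
        return self.pixel_w * pixel + self.percep_w * percep + self.adv_w * adv
```

In this kind of setup, `d_sr` and `d_hr` would be the discriminator logits for the super-resolved and high-resolution images; the hinge-form gradient penalty mentioned in the abstract would be added to the discriminator objective separately.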