Hyperspectral Imagery Spatial Super-Resolution Using Generative Adversarial Network

2021 
Hyperspectral imagery contains both spatial structure information and abundant spectral features of imaged objects. However, due to sensor limitations, abundant spectral information always comes at the cost of low spatial resolution, which complicates object analysis and identification. Super-resolution (SR) of hyperspectral images (HSIs) by traditional interpolation algorithms, or by network models trained with a mean-square-error-based loss function, tends to produce over-smoothed images. In this paper, we propose a novel Hyperspectral imagery Spatial Super-Resolution algorithm based on a Generative Adversarial Network (HSSRGAN). The generator network in HSSRGAN consists of two interacting parts, i.e., a spatial feature enhanced network (SFEN) and a spectral refined network (SRN), while the discriminator network is employed to predict the probability that the authentic high-resolution (HR) image is more realistic than the generated one. Concretely, SFEN, built with special dense residual blocks, is designed to fully extract and enhance deep hierarchical spatial features of hyperspectral imagery, while SRN is constructed to capture spectral interrelationships and refine spatial context information so as to increase spatial resolution and alleviate spectral distortion. Moreover, SFEN and SRN are trained with a least-absolute-deviation-based loss function to investigate spatial context and a spectral-angle-mapper-based loss function to refine spectral information. We validate two versions of the proposed algorithm, 3D-HSSRGAN and 2D-HSSRGAN, on the Pavia Centre and Cuprite datasets. Experimental results show that the presented approach is superior to several existing state-of-the-art methods.
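The training objective combines a least-absolute-deviation (L1) term for spatial fidelity with a spectral-angle-mapper (SAM) term for spectral fidelity. A minimal NumPy sketch of these two terms is shown below; the weighting factor `lam` and the function names are illustrative assumptions, since the paper's exact loss weights are not given in the abstract.

```python
import numpy as np

def l1_loss(sr, hr):
    """Least-absolute-deviation (L1) loss over all bands and pixels."""
    return np.mean(np.abs(sr - hr))

def sam_loss(sr, hr, eps=1e-8):
    """Spectral angle mapper: mean angle between the spectral vectors
    of the SR and HR images at each pixel.

    sr, hr: arrays of shape (bands, height, width).
    """
    dot = np.sum(sr * hr, axis=0)
    norm = np.linalg.norm(sr, axis=0) * np.linalg.norm(hr, axis=0)
    cos = np.clip(dot / (norm + eps), -1.0, 1.0)  # guard arccos domain
    return np.mean(np.arccos(cos))

def total_loss(sr, hr, lam=0.1):
    # lam is a hypothetical trade-off weight between spatial (L1)
    # and spectral (SAM) fidelity; not specified in the abstract.
    return l1_loss(sr, hr) + lam * sam_loss(sr, hr)
```

For identical SR and HR images both terms vanish (up to the `eps` guard), while a spectrally distorted reconstruction raises the SAM term even when its L1 error is small.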