Physics-Informed Hyperspectral Remote Sensing Image Synthesis With Deep Conditional Generative Adversarial Networks

2022 
High-resolution hyperspectral remote sensing images are of great significance to agricultural, urban, and military applications. However, collecting and labeling hyperspectral images is time-consuming, expensive, and usually relies heavily on domain knowledge. In this article, we propose a new method for generating high-resolution hyperspectral images and subpixel ground-truth annotations from RGB images. Unlike previous methods that directly predict spectral reflectance and ignore the physics behind it, our method takes a single high-resolution RGB image as its conditional input and accounts for both the imaging mechanism and spectral mixing: a deep generative network first recovers the spectral abundance for each pixel and then generates the final spectral data cube using the standard USGS spectral library. In this way, our method not only synthesizes high-quality spectral data that exists in the real world but also generates subpixel-level spectral abundance with well-defined spectral reflectance characteristics. We also introduce a spatial discriminative network and a spectral discriminative network to improve the fidelity of the synthetic output from both spatial and spectral perspectives. The whole framework can be trained end-to-end in an adversarial training paradigm. We refer to our method as "Physics-informed Deep Adversarial Spectral Synthesis (PDASS)." On the IEEE grss_dfc_2018 dataset, our method achieves an MPSNR of 47.56 dB in spectral reconstruction accuracy, outperforming other state-of-the-art methods. The generated latent variables, including the spectral abundance and the atmospheric absorption coefficients of sunlight, further suggest the effectiveness of our method.
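To make the abundance-based reconstruction concrete, below is a minimal sketch of the linear spectral mixing step the abstract describes: a generator predicts per-pixel endmember abundances, and the hyperspectral cube is recovered as a linear combination of library endmember spectra. This is not the authors' released code; the class and function names, the toy generator architecture, and all tensor shapes are illustrative assumptions, and the random endmember matrix stands in for actual USGS library spectra.

```python
# Sketch (assumed, not the paper's implementation) of abundance prediction
# followed by linear spectral mixing against a fixed endmember library.
import torch
import torch.nn as nn

class AbundanceGenerator(nn.Module):
    """Maps an RGB image to per-pixel endmember abundances that sum to 1."""
    def __init__(self, num_endmembers: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_endmembers, 1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # Softmax over the endmember channel enforces the per-pixel
        # abundance sum-to-one constraint of the linear mixing model.
        return torch.softmax(self.net(rgb), dim=1)

def mix_spectra(abundance: torch.Tensor, endmembers: torch.Tensor) -> torch.Tensor:
    """Linear mixing: cube[b,l,h,w] = sum_e abundance[b,e,h,w] * endmembers[e,l].

    abundance:  (B, E, H, W) per-pixel abundances
    endmembers: (E, L) library reflectance spectra (e.g., from the USGS library)
    returns:    (B, L, H, W) synthesized hyperspectral cube
    """
    return torch.einsum("behw,el->blhw", abundance, endmembers)

# Usage with toy shapes: 8 endmembers, 48 spectral bands.
endmembers = torch.rand(8, 48)              # placeholder for real USGS spectra
gen = AbundanceGenerator(num_endmembers=8)
rgb = torch.rand(2, 3, 64, 64)
cube = mix_spectra(gen(rgb), endmembers)    # (2, 48, 64, 64)
```

In the full adversarial framework described above, such a cube would then be scored by the spatial and spectral discriminators; those networks are omitted here since the abstract does not specify their architectures.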