A Spatial–Spectral Dual-Optimization Model-Driven Deep Network for Hyperspectral and Multispectral Image Fusion

2022 
Deep learning, especially convolutional neural networks (CNNs), has shown very promising results for the multispectral (MS) and hyperspectral (HS) image fusion (MS/HS fusion) task. Most existing CNN methods are based on “black-box” models that are not specifically designed for MS/HS fusion: they largely ignore the priors evidently possessed by the observed HS and MS images and lack clear interpretability, leaving room for further improvement. In this article, we propose an interpretable network, named the spatial–spectral dual-optimization model-driven deep network (S$^{2}$DMDN), which embeds the intrinsic generation mechanism of MS/HS fusion into the network. It has two key characteristics: 1) it explicitly encodes the spatial and spectral priors evidently possessed by the input MS and HS images in the network architecture and 2) it unfolds an iterative spatial–spectral dual-optimization algorithm into a model-driven deep network. The benefit is that the network has good interpretability and generalization capability, and the fused image is richer in semantics and more precise in spatial detail. Extensive experiments demonstrate the superiority of the proposed method over other state-of-the-art methods in terms of both quantitative evaluation metrics and qualitative visual effects.
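
The abstract does not detail the unfolded architecture, but the deep-unfolding recipe it describes can be sketched. Under the usual MS/HS observation model, the low-resolution HS image is a spatially degraded version of the latent high-resolution HS image and the MS image is a spectrally degraded one; each unfolded stage then alternates a data-consistency step against the MS image (spatial branch) and against the HS image (spectral branch), each followed by a small learned prior network. The PyTorch sketch below is a minimal illustration of this idea under those assumptions; the module names (`S2DMDNSketch`, `DualOptStage`, `PriorCNN`), the stage count, and the degradation operators are hypothetical, not the authors' implementation.

```python
# Minimal sketch of a spatial-spectral dual-optimization unfolding network.
# Assumptions (not from the paper): the degradation operators, prior CNNs,
# stage count, and step sizes are all illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PriorCNN(nn.Module):
    """Small residual CNN acting as a learned proximal (prior) operator."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)


class DualOptStage(nn.Module):
    """One unfolded iteration: a gradient step on the spatial (MS) data term and
    one on the spectral (HS) data term, each followed by a learned prior."""
    def __init__(self, hs_bands, ms_bands, scale):
        super().__init__()
        self.scale = scale
        self.R = nn.Conv2d(hs_bands, ms_bands, 1, bias=False)   # learned spectral response
        self.Rt = nn.Conv2d(ms_bands, hs_bands, 1, bias=False)  # learned adjoint of R
        self.spatial_prior = PriorCNN(hs_bands)
        self.spectral_prior = PriorCNN(hs_bands)
        self.eta = nn.Parameter(torch.tensor(0.1))               # spatial step size
        self.mu = nn.Parameter(torch.tensor(0.1))                # spectral step size

    def forward(self, x, y_hs, y_ms):
        # Spatial branch: enforce consistency with the high-resolution MS image.
        x = x - self.eta * self.Rt(self.R(x) - y_ms)
        x = self.spatial_prior(x)
        # Spectral branch: enforce consistency with the low-resolution HS image
        # (average pooling stands in for the blur/downsampling operator).
        x_low = F.avg_pool2d(x, self.scale)
        resid = F.interpolate(x_low - y_hs, scale_factor=self.scale, mode="bilinear")
        x = x - self.mu * resid
        x = self.spectral_prior(x)
        return x


class S2DMDNSketch(nn.Module):
    """Stack of K unfolded stages; the initial estimate is the upsampled HS image."""
    def __init__(self, hs_bands=31, ms_bands=3, scale=8, stages=4):
        super().__init__()
        self.scale = scale
        self.stages = nn.ModuleList(
            [DualOptStage(hs_bands, ms_bands, scale) for _ in range(stages)]
        )

    def forward(self, y_hs, y_ms):
        x = F.interpolate(y_hs, scale_factor=self.scale, mode="bilinear")
        for stage in self.stages:
            x = stage(x, y_hs, y_ms)
        return x


if __name__ == "__main__":
    y_hs = torch.randn(1, 31, 16, 16)    # low-resolution hyperspectral input
    y_ms = torch.randn(1, 3, 128, 128)   # high-resolution multispectral input
    print(S2DMDNSketch()(y_hs, y_ms).shape)  # torch.Size([1, 31, 128, 128])
```

Because every stage repeats the same data-consistency/prior pattern, each learned block has a clear role in the underlying optimization, which is the source of the interpretability the abstract claims for model-driven unfolding.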