Hyperspectral and Multispectral Image Fusion Via Self-Supervised Loss and Separable Loss

2022 
Fusion of hyperspectral images (HSIs), which have low spatial and high spectral resolution, with multispectral images (MSIs), which have high spatial and low spectral resolution, is an important way to improve spatial resolution. Existing deep-learning-based image fusion methods usually neglect the network's ability to learn discriminative features, and their loss constraints do not stem from the physical characteristics of hyperspectral (HS) imaging sensors. We propose a self-supervised loss and a spatially and spectrally separable loss: 1) the self-supervised loss: unlike the previous practice of directly stacking the upsampled HSIs and MSIs as input, we expect the processed HSIs to preserve the integrity of the HSI information while maintaining a reasonable balance between overall spatial and spectral features. First, the pre-interpolated HSIs are decomposed into subspaces that serve as self-supervised labels; then, a network is designed to learn the subspace information and extract the most discriminative features (a sketch of such a decomposition follows below); and 2) the separable loss: according to the physical characteristics of HSIs, the pixel-based mean square error loss is first divided into a spatial domain loss and a spectral domain loss; the similarity scores of the images are then computed and used to construct weighting coefficients for the two domain losses, and the separable loss is finally expressed as their weighted combination (see the second sketch below). Experiments on public benchmark datasets show that the self-supervised loss and the separable loss improve fusion performance.
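As a rough illustration of the subspace decomposition used to build self-supervised labels, the sketch below factorizes a pre-interpolated HSI into a low-dimensional spectral subspace via truncated SVD. The function name, the SVD choice, and the subspace dimension `k` are assumptions for illustration; the paper's exact decomposition may differ.

```python
import torch

def hsi_subspace_labels(hsi_up, k=8):
    """Decompose a pre-interpolated HSI (C, H, W) into a k-dimensional
    spectral subspace; the basis and coefficient maps can serve as
    self-supervised labels. `k` is an assumed hyperparameter."""
    C, H, W = hsi_up.shape
    X = hsi_up.reshape(C, H * W)                 # unfold: (bands, pixels)
    U, S, Vh = torch.linalg.svd(X, full_matrices=False)
    basis = U[:, :k]                             # spectral basis (C, k)
    coeffs = (basis.T @ X).reshape(k, H, W)      # coefficient maps (k, H, W)
    return basis, coeffs

# Example: a 31-band HSI patch decomposed into an 8-dimensional subspace.
basis, maps = hsi_subspace_labels(torch.rand(31, 64, 64), k=8)
```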
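And as a minimal sketch of the separable loss, the following PyTorch code splits the pixel-based MSE into a spatial-domain term (per-band spatial error) and a spectral-domain term (a SAM-like per-pixel spectral mismatch), then combines them with similarity-derived weights. The exponential mapping from similarity scores to weighting coefficients is an assumption; the paper's exact similarity measure and weighting may differ.

```python
import torch
import torch.nn.functional as F

def separable_loss(pred, target, eps=1e-8):
    """Spatially and spectrally separable loss (illustrative sketch).
    pred, target: (B, C, H, W) fused and reference HSI batches."""
    # Spatial-domain term: MSE over spatial locations, per spectral band.
    l_spat = ((pred - target) ** 2).mean(dim=(2, 3)).mean()

    # Spectral-domain term: per-pixel spectral mismatch along the band
    # axis, here 1 - cosine similarity (SAM-like).
    cos = F.cosine_similarity(pred, target, dim=1, eps=eps)  # (B, H, W)
    l_spec = (1.0 - cos).mean()

    # Similarity scores -> weighting coefficients (assumed mapping:
    # detached losses are turned into normalized weights).
    s_spat = torch.exp(-l_spat.detach())
    s_spec = torch.exp(-l_spec.detach())
    w_spat = s_spat / (s_spat + s_spec)
    w_spec = 1.0 - w_spat

    # Joint weighted expression of the two domain losses.
    return w_spat * l_spat + w_spec * l_spec
```

Detaching the weights keeps them as adaptive coefficients rather than extra gradient paths, so each domain loss is still optimized directly.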