P3Net: Pansharpening via Pyramidal Detail Injection With Deep Physical Constraints

2022 
Pansharpening is an image fusion process that generates a high-resolution multispectral (HRMS) image from a low-resolution multispectral (LRMS) image and a high-resolution panchromatic (PAN) image. It is a fundamental and significant task for the widespread use of remote sensing imagery. This article proposes a new residual-learning-based multispectral pansharpening network constrained by two deep physical models, collectively termed P3Net. It consists of the main network, PDFNet, and two auxiliary physical models, M2PNet and H2LNet. Unlike existing methods that process only a single image scale, the proposed PDFNet extracts spatial details from a multilevel image pyramid of decreasing spatial scales and then injects this spatial information into the upsampled LRMS image. Since the pansharpened result should be consistent with the observed inputs under the imaging physics, we learn deep pansharpening physics models that represent the inverse relationships: the lightweight M2PNet and H2LNet model the latent nonlinear mappings from the HRMS image to the PAN image and to the LRMS image, respectively. The two pretrained physics models are frozen and guide the training of PDFNet, providing clear physical interpretability and further suppressing spectral and spatial distortions. Comparative experiments against state-of-the-art pansharpening methods on the QuickBird, GaoFen, and WorldView test sets demonstrate the superiority of the proposed method in terms of both quantitative metrics and subjective visual quality. The code is available at https://github.com/KSJhon/PyramidPanWithPhysics .
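The pyramidal detail-injection idea can be illustrated with a minimal NumPy sketch (not the authors' implementation; pooling/upsampling operators, pyramid depth, and per-band gains are illustrative assumptions — in P3Net these steps are learned by PDFNet):

```python
import numpy as np

def downsample2(img):
    """2x2 average-pooling downsample (crops to even size first)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def upsample2(img):
    """Nearest-neighbour 2x upsample."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def pyramid_details(pan, levels=3):
    """Spatial detail at each pyramid scale: a level minus its down-up blurred version."""
    details, cur = [], pan
    for _ in range(levels):
        low = downsample2(cur)
        details.append(cur - upsample2(low))
        cur = low
    return details  # ordered finest to coarsest

def inject_details(lrms_up, details, gains=None):
    """Add the finest-scale PAN detail map to each band of the upsampled LRMS."""
    if gains is None:
        gains = np.ones(lrms_up.shape[-1])  # per-band injection gains (assumed uniform)
    d = details[0]  # finest scale lives on the HR grid
    return lrms_up + gains[None, None, :] * d[:, :, None]

# Toy example: 64x64 PAN and a 4-band 16x16 LRMS upsampled 4x to the PAN grid.
rng = np.random.default_rng(0)
pan = rng.random((64, 64))
lrms = rng.random((16, 16, 4))
lrms_up = np.repeat(np.repeat(lrms, 4, axis=0), 4, axis=1)  # naive 4x upsample
dets = pyramid_details(pan, levels=3)
hrms = inject_details(lrms_up, dets)
print(hrms.shape)  # (64, 64, 4)
```

In the full method, the frozen M2PNet and H2LNet would additionally penalize the result: mapping `hrms` back through them should reproduce `pan` and `lrms`, which is the physics-consistency constraint described above.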