DFPGAN: Dual fusion path generative adversarial network for infrared and visible image fusion

2021 
Abstract Infrared and visible image fusion is an essential task in multi-sensor image fusion. Generative adversarial networks (GANs) have achieved remarkable performance in fusing infrared and visible images. Existing GAN-based fusion methods use only the infrared and visible images as input, yet we found that differential images, obtained by subtracting one source image from the other, can provide contrast information for the fusion. To this end, a novel dual fusion path generative adversarial network (DFPGAN) is proposed in this paper for infrared and visible image fusion. We divide the generator into two fusion paths: an infrared–visible path and a differential path. The infrared–visible path takes the concatenation of the two source images as input, so that infrared intensity and texture details are fused in balance along this path. The differential path takes the concatenation of the differential images, obtained by subtraction between the two sources, as input, so that contrast information is fused along this path. The features extracted by the two paths are concatenated at the end of the generator to produce fused images with strong contrast and a balanced information distribution. Meanwhile, a dual self-attention feature refinement module (DSAM) is applied on both fusion paths to refine their feature maps. We substitute switchable normalization (SN) layers for batch normalization (BN) layers in the generator and discriminator to avoid fusion artifacts. Furthermore, a mixed content loss is integrated into the generator loss function to guide the generated image to keep a balanced information distribution while preserving contrast. The adversarial training employs a dual-adversarial architecture to balance the distribution of infrared intensity and texture details.
To verify the improvement that the fused images bring to target detection, we introduce the Scaled-YOLOv4 detection framework as an evaluation framework and use the proposed network to fuse RGB and infrared images for target detection. Qualitative and quantitative experiments conducted on public datasets demonstrate the superiority of the proposed network over other state-of-the-art methods and show that it generates fused images with distinct contrast.
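The dual-path generator described in the abstract can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the paper's actual implementation: the layer widths, depths, and the use of two differential channels (IR − VIS and VIS − IR) are assumptions, and the DSAM, SN layers, and discriminators are omitted for brevity.

```python
import torch
import torch.nn as nn


class DualPathGenerator(nn.Module):
    """Illustrative sketch of DFPGAN's two fusion paths (hypothetical layer sizes)."""

    def __init__(self, channels: int = 16):
        super().__init__()
        # Infrared-visible path: input concatenates the two source images (2 channels).
        self.iv_path = nn.Sequential(
            nn.Conv2d(2, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
        )
        # Differential path: input concatenates the differential images
        # obtained by subtraction between the two sources (assumed here to be
        # IR - VIS and VIS - IR, giving 2 channels).
        self.diff_path = nn.Sequential(
            nn.Conv2d(2, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
        )
        # Features from both paths are concatenated at the end of the generator
        # and decoded into a single fused image.
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=1),
            nn.Tanh(),
        )

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        iv_feat = self.iv_path(torch.cat([ir, vis], dim=1))
        diff_feat = self.diff_path(torch.cat([ir - vis, vis - ir], dim=1))
        return self.decoder(torch.cat([iv_feat, diff_feat], dim=1))


# Single-channel (grayscale) 64x64 source images, batch size 1.
ir = torch.rand(1, 1, 64, 64)
vis = torch.rand(1, 1, 64, 64)
fused = DualPathGenerator()(ir, vis)
print(tuple(fused.shape))  # (1, 1, 64, 64)
```

In this sketch the fused output has the same spatial size and channel count as each source image; the key design point from the abstract is that contrast cues enter only through the differential path, while the infrared–visible path balances intensity and texture.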