MMF: A Multi-scale MobileNet based Fusion Method for Infrared and Visible Image

2021 
Abstract: To improve the quality and real-time performance of image fusion for target recognition and tracking, a multi-scale MobileNet based fusion (MMF) method for infrared and visible images is proposed. We adopt an end-to-end convolutional neural network (CNN) composed of only three layers to fuse the source images. The first layer maps the input images to a high-dimensional feature space, the second layer extracts high-dimensional features with the multi-scale MobileNet block (MMB), and the third layer combines these features to generate the fused image. To enhance the saliency recognition and detail preservation ability of the fusion network, an anisotropic diffusion (AD) filter is introduced into the loss function. Experimental results show that our fusion method achieves state-of-the-art performance in qualitative and quantitative evaluation and is 1-2 orders of magnitude faster than representative CNN-based image fusion methods.
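The AD filter mentioned in the abstract is commonly realized as Perona-Malik anisotropic diffusion, which smooths homogeneous regions while preserving edges. The sketch below shows that standard formulation in NumPy; the exact AD variant, iteration count, and parameters used in the paper's loss function are not specified in the abstract, so the function name and defaults here are illustrative assumptions.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.15):
    """Standard Perona-Malik anisotropic diffusion (illustrative sketch;
    not necessarily the exact AD filter used in the MMF loss).

    img    : 2D grayscale image array
    n_iter : number of diffusion iterations
    kappa  : edge-stopping threshold (gradients >> kappa are preserved)
    lam    : integration step, <= 0.25 for numerical stability
    """
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Differences to the four neighbors, zero at the borders
        dn = np.zeros_like(u); dn[1:, :] = u[:-1, :] - u[1:, :]
        ds = np.zeros_like(u); ds[:-1, :] = u[1:, :] - u[:-1, :]
        de = np.zeros_like(u); de[:, :-1] = u[:, 1:] - u[:, :-1]
        dw = np.zeros_like(u); dw[:, 1:] = u[:, :-1] - u[:, 1:]
        # Exponential edge-stopping conduction: ~1 in flat areas, ~0 at edges
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        # Diffusion update: smooth where conduction is high
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

On a noisy step image, a few iterations suppress the noise in the flat halves while the step edge (whose gradient far exceeds kappa) remains sharp, which is the property the paper exploits to balance detail preservation against saliency.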