DCKN: Multi-focus image fusion via dynamic convolutional kernel network

2021 
Abstract In current multi-focus image fusion approaches based on convolutional neural networks (CNNs), the same set of convolutional kernels is applied to all regions of the multi-focus images for feature extraction. However, the same kernels may not be optimal for every region, which can introduce artifacts in textureless and edge regions of the fused image. To address this problem, this paper proposes a dynamic convolutional kernel network (DCKN) for multi-focus image fusion, in which the convolutional kernels are dynamically generated from region context conditioned on the input images. The kernels in the proposed architecture are not only position-varying but also sample-varying, so they can adapt accurately to the spatially variant blur caused by depth and texture variations in multi-focus images. Moreover, our DCKN works in both supervised and unsupervised learning. For supervised learning, the ground-truth fusion image is used to supervise the output fused image. For unsupervised learning, we introduce a bright channel loss and a total variation loss to jointly constrain the DCKN. The bright channel metric can roughly determine whether source pixels are focused, and it is used to guide the training process of the unsupervised network. Extensive experiments on popular multi-focus images show that our DCKN, without any post-processing, is comparable to state-of-the-art approaches, and that our unsupervised model also achieves high fusion quality.
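To make the idea of input-conditioned, spatially varying kernels concrete, the following is a minimal PyTorch-style sketch of a kernel-prediction module: a small CNN predicts a k x k kernel at every pixel from the local context of the source stack, and each output pixel is computed with its own kernel. This is not the paper's implementation; the layer widths, the kernel size, the softmax normalization, and the choice to filter the averaged source pair are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicKernelFusion(nn.Module):
    """Sketch of position- and sample-varying kernel prediction (assumed design)."""

    def __init__(self, in_channels=2, k=3, hidden=32):
        super().__init__()
        self.k = k
        # Context encoder: predicts k*k kernel weights per pixel from the input images.
        self.kernel_net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, k * k, 3, padding=1),
        )

    def forward(self, src_a, src_b):
        # src_a, src_b: (B, 1, H, W) grayscale sources with different focus settings.
        x = torch.cat([src_a, src_b], dim=1)
        B, _, H, W = x.shape
        # Per-pixel kernels, normalized so each kernel's weights sum to 1.
        kernels = F.softmax(self.kernel_net(x), dim=1)            # (B, k*k, H, W)
        # Simplification: apply the predicted kernels to the averaged source pair.
        base = 0.5 * (src_a + src_b)                              # (B, 1, H, W)
        patches = F.unfold(base, self.k, padding=self.k // 2)     # (B, k*k, H*W)
        patches = patches.view(B, self.k * self.k, H, W)
        fused = (patches * kernels).sum(dim=1, keepdim=True)      # (B, 1, H, W)
        return fused
```

Because the kernels are produced by a network that sees the input images, they differ across spatial positions and across samples, which is the property the abstract attributes to DCKN; how the paper actually combines the per-pixel kernels with the two sources may differ from this averaged-input simplification.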