A Deep Model for Multi-Focus Image Fusion Based on Gradients and Connected Regions

2020 
In this paper, we propose a novel unsupervised model for multi-focus image fusion based on gradients and connected regions, termed GCF. To overcome the stumbling block of vanishing gradients when applying deep networks to multi-focus image fusion, we design Mask-Net, which directly generates a binary mask. Thus, there is no need for hand-crafted feature extraction or fusion rules. Based on the fact that objects within the depth-of-field (DOF) have a sharper appearance, i.e., larger gradients, we use the gradient relation map obtained from the source images to narrow the solution domain and speed up convergence. In addition, a constraint on the number of connected regions is conducive to finding a more accurate binary mask. With a consistency verification strategy, the final mask is obtained by refining the initial binary mask and is then used to generate the fused result. Therefore, the proposed method is an unsupervised model that requires no ground-truth data. Both qualitative and quantitative experiments are conducted on the publicly available Lytro dataset. The results show that GCF outperforms the state-of-the-art in both visual perception and objective metrics.
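The following is a minimal sketch, not the authors' implementation, of the gradient-relation prior and mask-based fusion the abstract describes: locally averaged gradient energies of the two source images are compared to form an initial binary focus mask, small connected regions are suppressed as a rough stand-in for the connected-region constraint and consistency verification, and the sources are fused by pixel-wise selection. All function names, the window size, and the area threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def gradient_relation_mask(img_a, img_b, window=9):
    """Initial binary mask: 1 where img_a has larger local gradient energy."""
    def grad_energy(img):
        gy, gx = np.gradient(img.astype(np.float64))
        # Average squared gradient magnitude over a local window.
        return uniform_filter(gx ** 2 + gy ** 2, size=window)

    return (grad_energy(img_a) >= grad_energy(img_b)).astype(np.float64)

def remove_small_regions(mask, min_area=200):
    """Crude consistency step: drop connected regions smaller than min_area."""
    cleaned = mask.copy()
    for value in (1.0, 0.0):
        labeled, num = label(cleaned == value)
        sizes = np.bincount(labeled.ravel())
        for region_id in range(1, num + 1):
            if sizes[region_id] < min_area:
                cleaned[labeled == region_id] = 1.0 - value
    return cleaned

def fuse(img_a, img_b, mask):
    """Pixel-wise selection of the in-focus source according to the mask."""
    return mask * img_a + (1.0 - mask) * img_b
```

In GCF itself the binary mask is produced by Mask-Net and refined with the connected-region constraint and consistency verification; the sketch above only mirrors the gradient-relation idea used to narrow the solution domain.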