Exposure fusion via sparse representation and shiftable complex directional pyramid transform

2017 
Sparse coding with the sliding-window technique can be used for efficient fusion of multi-exposure images. However, when the source images are large, this process is time-consuming. To address this problem, we propose a method that uses the low-frequency sub-images of the source images as the input to the sparse-coding fusion framework. These low-frequency sub-images, which are far smaller than the full images, provide a coarse representation of the originals. Regarding multi-scale decomposition, the high redundancy ratio of some methods limits their applicability to image fusion, especially multi-exposure image fusion, which usually involves more than two source images. In this paper, we employ a novel shiftable complex directional pyramid, which offers shift invariance with a low redundancy ratio, to obtain the low- and high-frequency sub-images. For the high-frequency sub-images, we introduce a novel fusion rule based on the entropy of segmented blocks, which preserves more details from the source images. Experiments show that our method attains results comparable to or better than those of existing methods.
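The high-frequency fusion rule described above can be illustrated with a short sketch. Below is a minimal NumPy example of an entropy-driven, block-wise selection rule applied to one high-frequency sub-band, assuming a hypothetical 8x8 block size and sub-bands already produced by some multi-scale decomposition; it is not the authors' exact implementation, and the shiftable complex directional pyramid transform and the sparse-coding fusion of the low-frequency sub-images are outside its scope.

import numpy as np

def block_entropy(block, bins=64):
    # Shannon entropy (in bits) of the block's coefficient histogram.
    hist, _ = np.histogram(block, bins=bins)
    p = hist[hist > 0].astype(np.float64)
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))

def fuse_highpass_by_entropy(sub_bands, block_size=8):
    # sub_bands: list of 2-D arrays, one high-frequency sub-band per exposure,
    # all of the same shape. For each block, keep the coefficients of the
    # source whose block has the highest entropy (block_size=8 is an assumption).
    h, w = sub_bands[0].shape
    fused = np.zeros_like(sub_bands[0])
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            blocks = [b[i:i + block_size, j:j + block_size] for b in sub_bands]
            best = int(np.argmax([block_entropy(blk) for blk in blocks]))
            fused[i:i + block_size, j:j + block_size] = blocks[best]
    return fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bands = [rng.standard_normal((64, 64)) for _ in range(3)]  # three exposures
    print(fuse_highpass_by_entropy(bands).shape)  # (64, 64)

In the method summarized in the abstract, such a rule would be applied to each high-frequency sub-band of the pyramid, while the fused low-frequency sub-image would come from the sparse-coding stage, before the inverse transform reconstructs the final fused image.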