A visual saliency based sample augmentation method for image patch similarity comparison

2019 
In this paper, we propose a sample augmentation method for Convolutional Neural Network (CNN) based image patch similarity comparison methods, which have shown clear advantages over conventional methods. In our method, we augment the learning samples by pre-processing them with their visual saliency maps. By training on the saliency-enhanced samples rather than the raw samples, the resulting CNN models are more robust to background changes and minor cluttered interference, and therefore achieve higher comparison accuracy. In the experiments, we test our method by combining it with the 2-channel CNN model, and compare it with the original 2-channel, Siamese, and Pseudo-Siamese CNN models. The experimental results demonstrate the superiority of the proposed method: it enables the learning networks to obtain higher comparison accuracy while also exhibiting stronger stability.
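The abstract does not specify which saliency model is used for pre-processing, so the sketch below is illustrative only: it computes a saliency map with the classic spectral residual method (Hou & Zhang, 2007) and blends it multiplicatively with the raw patch. The `alpha` blending weight and the `saliency_enhance` helper are assumptions introduced here for illustration, not the authors' method.

```python
import numpy as np

def spectral_residual_saliency(patch):
    """Saliency map via the spectral residual method (Hou & Zhang, 2007).

    patch: 2-D float array (a grayscale image patch).
    Returns a saliency map of the same shape, normalized to [0, 1].
    """
    f = np.fft.fft2(patch)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)

    # Spectral residual = log amplitude minus its local (3x3 box-filtered) average.
    k = 3
    pad = np.pad(log_amp, k // 2, mode="edge")
    h, w = log_amp.shape
    avg = sum(pad[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)
    residual = log_amp - avg

    # Reconstruct in the spatial domain; squared magnitude gives the saliency map.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
    return sal

def saliency_enhance(patch, alpha=0.5):
    """Blend the raw patch with its saliency-weighted version.

    alpha=1 keeps the raw patch; alpha=0 keeps only salient regions.
    (The blending scheme is a hypothetical choice, not taken from the paper.)
    """
    sal = spectral_residual_saliency(patch)
    return alpha * patch + (1 - alpha) * patch * sal
```

The enhanced patches would then replace the raw patches as the input pairs (e.g. the two channels of a 2-channel CNN) during training, so the network sees inputs where non-salient background is attenuated.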