Multi-Relation Attention Network for Image Patch Matching

2021 
Deep convolutional neural networks have attracted increasing attention in image patch matching. However, most of them rely on a single similarity learning model, such as feature distance or the correlation of concatenated features. Their performance degrades under the complex relations between matching patches caused by various imagery changes. To tackle this challenge, we propose a multi-relation attention learning network (MRAN) for image patch matching. Specifically, we propose to fuse multiple feature relations (MR) for matching, which benefits from the complementary advantages of different feature relations and achieves significant improvements on matching tasks. Furthermore, we propose a relation attention learning module to learn the fused relation adaptively. With this module, meaningful feature relations are emphasized and the others are suppressed. Extensive experiments show that our MRAN achieves the best matching performance and generalizes well to multi-modal image patch matching, multi-modal remote sensing image patch matching, and image retrieval tasks.
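The core idea of the abstract can be illustrated with a minimal NumPy sketch: two elementwise feature relations (a distance-style relation and a correlation-style relation) are scored, normalized with a softmax so that meaningful relations are emphasized and others suppressed, and then fused. The scoring vector `w_att` and both relation choices here are illustrative assumptions, not the paper's exact architecture, which is learned end to end with convolutional layers.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for the attention weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_relation_fusion(f1, f2, w_att):
    """Fuse multiple feature relations with attention weights.

    f1, f2 : (batch, dim) patch descriptors.
    w_att  : (dim,) hypothetical scoring vector standing in for the
             learned relation attention module.
    Returns the fused relation (batch, dim) and the attention
    weights (batch, 2).
    """
    # Relation 1: elementwise absolute difference (distance-style).
    r_diff = np.abs(f1 - f2)                        # (batch, dim)
    # Relation 2: elementwise product (correlation-style).
    r_prod = f1 * f2                                # (batch, dim)
    relations = np.stack([r_diff, r_prod], axis=1)  # (batch, 2, dim)

    # Score each relation, then normalize across relations so the
    # weights for every sample sum to one.
    scores = relations @ w_att                      # (batch, 2)
    attn = softmax(scores, axis=1)                  # (batch, 2)

    # Attention-weighted fusion of the relation maps.
    fused = (attn[..., None] * relations).sum(axis=1)  # (batch, dim)
    return fused, attn

rng = np.random.default_rng(0)
f1 = rng.standard_normal((4, 8))
f2 = rng.standard_normal((4, 8))
w_att = rng.standard_normal(8)
fused, attn = multi_relation_fusion(f1, f2, w_att)
```

In the actual network, the fused relation would feed a classifier that outputs a match/non-match decision; this sketch only shows the fusion step.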