Deep Feature Correlation Learning for Multi-Modal Remote Sensing Image Registration

2022 
Deep descriptors have advantages over handcrafted descriptors in local image patch matching. However, due to the complex imaging mechanisms of remote sensing images and the significant appearance differences between multi-modal images, existing deep learning descriptors cannot be applied directly to multi-modal remote sensing image registration. To solve this problem, this article proposes a deep feature correlation learning network (Cnet) for multi-modal remote sensing image registration. First, Cnet builds a feature learning network based on a deep convolutional network with an attention learning module, which enhances feature representation by focusing on meaningful features. Second, this article designs a novel feature correlation loss function for Cnet optimization. It focuses on the relative feature correlation between matching and nonmatching samples, which improves the stability of network training and reduces the risk of overfitting. In addition, the proposed feature correlation loss with a scale factor further enhances network training and accelerates network convergence. Extensive experimental results on image patch matching (Brown, HPatches), cross-spectral image registration (VIS-NIR), multi-modal remote sensing image registration, and single-modal remote sensing image registration demonstrate the effectiveness and robustness of the proposed method.
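To make the idea of a "relative feature correlation" loss concrete, the sketch below shows one plausible way such an objective could be written in PyTorch. It is a minimal illustration under the assumptions that descriptors are L2-normalized, correlation is measured by cosine similarity, and the scale factor multiplies the margin term; the `scale` and `margin` values and the `feature_correlation_loss` helper are hypothetical and do not reproduce the paper's exact formulation.

```python
# Minimal sketch of a feature-correlation-style loss (assumptions: cosine
# similarity as the correlation measure; illustrative scale/margin values).
import torch
import torch.nn.functional as F


def feature_correlation_loss(anchor, positive, negative, scale=10.0, margin=0.5):
    """Encourage matching pairs to be more correlated than nonmatching pairs.

    anchor, positive, negative: (B, D) batches of descriptors.
    scale: hypothetical stand-in for the paper's scale factor; it sharpens
           the penalty and speeds up convergence in this sketch.
    """
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    n = F.normalize(negative, dim=1)

    pos_corr = (a * p).sum(dim=1)   # correlation of matching samples
    neg_corr = (a * n).sum(dim=1)   # correlation of nonmatching samples

    # Penalize cases where a nonmatching correlation comes within `margin`
    # of the matching correlation; only the relative gap matters.
    return F.softplus(scale * (neg_corr - pos_corr + margin)).mean()


if __name__ == "__main__":
    B, D = 8, 128
    anc, pos, neg = torch.randn(B, D), torch.randn(B, D), torch.randn(B, D)
    print(feature_correlation_loss(anc, pos, neg).item())
```

Because the loss depends only on the gap between matching and nonmatching correlations rather than on their absolute values, it is less sensitive to the overall descriptor scale, which is consistent with the stability and overfitting benefits described in the abstract.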