Wide-Context Attention Network for Remote Sensing Image Retrieval

2020
Remote sensing image retrieval (RSIR) has broad application prospects, but significant challenges remain. One of the most important is obtaining discriminative features. In recent years, the powerful feature learning ability of convolutional neural networks (CNNs) has significantly improved RSIR, yet performance can be limited by the complexity of remote sensing (RS) images, such as small objects, varying scales, and wide scope. To address these problems, we propose a novel wide-context attention network (W-CAN). It leverages two attention modules to adaptively learn local features correlated in the spatial and channel dimensions, respectively, yielding discriminative features with rich context information. During training, a hybrid loss is introduced to enhance the intraclass compactness and interclass separability of the features. Moreover, we add a branch to learn binary descriptors and realize end-to-end descriptor aggregation. Experiments on four RS benchmark data sets demonstrate that the proposed method outperforms several state-of-the-art RSIR methods.
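The abstract does not give the exact formulation of the two attention modules, but parameter-free spatial and channel self-attention over a CNN feature map can be sketched as follows; the function names, the residual fusion, and the absence of learned query/key projections are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(feat):
    # feat: (C, H, W). Each channel is re-weighted by its affinity
    # to every other channel (hypothetical parameter-free sketch).
    C, H, W = feat.shape
    x = feat.reshape(C, -1)             # (C, N), N = H*W
    attn = softmax(x @ x.T, axis=-1)    # (C, C) channel affinities
    out = attn @ x                      # aggregate cross-channel context
    return feat + out.reshape(C, H, W)  # residual connection (assumed)

def spatial_attention(feat):
    # feat: (C, H, W). Each spatial position aggregates features from
    # all other positions, capturing wide spatial context.
    C, H, W = feat.shape
    x = feat.reshape(C, -1)             # (C, N)
    attn = softmax(x.T @ x, axis=-1)    # (N, N) position affinities
    out = x @ attn.T                    # context-weighted positions
    return feat + out.reshape(C, H, W)

# Toy feature map standing in for a CNN backbone's output.
feat = np.random.rand(8, 4, 4).astype(np.float32)
fused = channel_attention(feat) + spatial_attention(feat)
print(fused.shape)  # (8, 4, 4)
```

Summing the two attended maps is one simple fusion choice; the actual W-CAN may combine the branches differently and would use learned projections trained with the hybrid loss described above.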