Semantic Context-Aware Network for Multiscale Object Detection in Remote Sensing Images

2021 
Accurate object detection in remote sensing images is an essential part of the automatic extraction, analysis, and understanding of image information, and it plays a significant role in many practical applications. However, the scale diversity of objects in remote sensing images poses a substantial challenge for object detection and is regarded as one of the crucial problems to be solved. To extract multiscale feature representations and fully exploit semantic context information, this letter proposes a semantic context-aware network (SCANet) for multiscale object detection. We propose two novel modules, the receptive field-enhancement module (RFEM) and the semantic context fusion module (SCFM), to enhance the performance of SCANet. The RFEM is dedicated to more robust multiscale feature extraction, attending to distinct receptive fields through multiple parallel branches with different convolutions. To exploit the semantic context information contained in the scene and guide the network toward better detection accuracy, the SCFM integrates semantic context features from the upper level with lower-level features and delivers them hierarchically. Experiments demonstrate that SCANet yields detection results superior to state-of-the-art approaches on the DOTA-v1.5 data set.
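The abstract describes two mechanisms: parallel convolution branches with distinct receptive fields (RFEM) and a top-down fusion of coarse, semantically rich features into finer, lower-level features (SCFM). The sketch below is not the authors' implementation, whose details are not given here; it is a minimal NumPy illustration of the two underlying ideas, assuming dilated 3x3 convolutions as the way branches obtain different receptive fields and element-wise addition after nearest-neighbour upsampling as the fusion operation. The function names `rfem` and `scfm` are hypothetical labels for these sketches.

```python
import numpy as np

def conv2d(x, k, dilation=1):
    """Naive 'same' 2-D convolution of a single-channel map x with a 3x3
    kernel k, optionally dilated. Dilation enlarges the receptive field
    without adding parameters."""
    kh, kw = k.shape
    pad = dilation * (kh // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            # Sample the padded input on a dilated 3x3 grid centred at (i, j).
            patch = xp[i:i + dilation * kh:dilation,
                       j:j + dilation * kw:dilation]
            out[i, j] = np.sum(patch * k)
    return out

def rfem(x, kernels, dilations=(1, 2, 3)):
    """Hypothetical receptive field-enhancement sketch: parallel branches
    with different dilation rates see different receptive fields; their
    responses are summed (one plausible fusion choice, assumed here)."""
    return sum(conv2d(x, k, d) for k, d in zip(kernels, dilations))

def scfm(high, low):
    """Hypothetical semantic context fusion sketch: upsample the coarser,
    semantically richer map 2x (nearest neighbour) and add it to the
    finer lower-level map, so context flows down the hierarchy."""
    up = high.repeat(2, axis=0).repeat(2, axis=1)
    return low + up[:low.shape[0], :low.shape[1]]
```

In a real detector each branch would carry learned multi-channel kernels and the fusion would typically involve 1x1 convolutions; the sketch keeps only the structural pattern of multibranch receptive-field extraction followed by hierarchical top-down delivery.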