Rotation-Invariant Feature Learning for Object Detection in VHR Optical Remote Sensing Images by Double-Net

2019 
Rotation-invariant feature extraction is crucial for object detection in Very High Resolution (VHR) optical remote sensing images. Although convolutional neural networks (CNNs) excel at extracting translation-invariant features and have been widely applied in computer vision, extracting rotation-invariant features from VHR optical remote sensing images remains a challenging problem for CNNs. In this paper, we present a novel Double-Net that takes sample pairs from the same class as inputs to improve object detection and classification performance in VHR optical remote sensing images. Specifically, the proposed Double-Net contains multiple CNN channels, each corresponding to a specific rotation direction, with all CNNs sharing identical weights. A multiple-instance learning algorithm is then applied to the output features of all channels to extract the final rotation-invariant features. Experimental results on two publicly available benchmark datasets, Mnist-rot-12K and NWPU VHR-10, demonstrate that the presented Double-Net outperforms existing approaches in rotation-invariant feature extraction and is especially effective when training samples are scarce.
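The core idea of the abstract — run rotated copies of an input through weight-sharing feature extractors and pool across the rotation channels — can be sketched in a toy form. This is a minimal NumPy sketch under assumptions: a single correlation filter with global max pooling stands in for the full shared-weight CNN, four 90-degree rotations stand in for the paper's rotation directions, and a max over channels stands in for the multiple-instance learning step; all function names here are hypothetical, not the authors' code.

```python
import numpy as np

def shared_features(x, w):
    # Stand-in for the shared-weight CNN branch: one valid-mode correlation
    # with filter w, followed by global max pooling to a scalar feature.
    # (Toy extractor; the paper uses full CNNs with identical weights.)
    h, ww = w.shape
    H, W = x.shape
    resp = np.array([[(x[i:i + h, j:j + ww] * w).sum()
                      for j in range(W - ww + 1)]
                     for i in range(H - h + 1)])
    return resp.max()

def rotation_pooled_feature(x, w):
    # One channel per rotation direction (0/90/180/270 degrees), all channels
    # sharing the same weights w; the max over channels mimics the
    # multiple-instance-learning selection over channel outputs.
    channels = [shared_features(np.rot90(x, k), w) for k in range(4)]
    return max(channels)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
w = rng.standard_normal((3, 3))

# Pooling over all four rotation channels makes the feature invariant to
# 90-degree rotations of the input: both calls evaluate the same channel set.
f0 = rotation_pooled_feature(x, w)
f1 = rotation_pooled_feature(np.rot90(x), w)
print(np.isclose(f0, f1))  # True
```

The invariance holds exactly here because rotating the input by 90 degrees only permutes which channel sees which orientation, leaving the pooled maximum unchanged; the paper's CNN-based channels follow the same principle for its discrete rotation directions.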