Class-Level Prototype Guided Multiscale Feature Learning for Remote Sensing Scene Classification With Limited Labels

2022 
Remote sensing scene classification (RSSC) is an open and challenging research topic in the remote sensing (RS) community. It aims to assign semantic labels to RS scenes according to their contents. Recently, with the development of deep convolutional neural networks (DCNNs), RSSC results have improved to a large extent. However, the strong performance of these DCNN-based models depends on large amounts of labeled data; once the volume of labeled data decreases, their performance degrades dramatically. In this article, we propose a new training algorithm that works smoothly with only a few labeled samples to address this limitation. Together with the introduced DCNN, the presented methods perform satisfactorily. In particular, we first construct a dual-branch network (DBNet) to mine multiscale and multiangle information from RS scenes, so that the abundant land covers with diverse sizes, directions, and shapes can be captured simultaneously. Then, to train DBNet using scarce semantic labels, a class-level prototype guided learning (CPGL) algorithm is developed based on the meta-learning paradigm. Beyond the usual episodic training manner, a prototype refinement module (PRM) and a prototype discrimination module (PDM) are designed with the help of metric learning theory to ensure the effectiveness of our CPGL. Comprehensive experiments are conducted on four public RS scene datasets, and the encouraging results imply that our DBNet and CPGL can cope with RSSC tasks with small labeled data. Our source codes are available at https://github.com/TangXu-Group/Remote-Sensing-Images-Classification/tree/main/CPGL .
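The abstract describes class-level prototypes learned under a meta-learning (episodic) paradigm with a metric-learning objective. The paper's exact modules (PRM, PDM) are not detailed here, but the core prototype idea can be sketched as follows: compute one prototype per class as the mean embedding of its labeled support samples, then assign each unlabeled query to the nearest prototype. All function names below are illustrative, not from the authors' code.

```python
import math

def class_prototypes(support_emb, support_labels, n_classes):
    """Class-level prototype: the mean embedding of each class's support samples."""
    protos = []
    dim = len(support_emb[0])
    for c in range(n_classes):
        members = [e for e, y in zip(support_emb, support_labels) if y == c]
        protos.append([sum(v[j] for v in members) / len(members) for j in range(dim)])
    return protos

def classify_queries(query_emb, prototypes):
    """Nearest-prototype assignment under the Euclidean metric."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [min(range(len(prototypes)), key=lambda c: dist(q, prototypes[c]))
            for q in query_emb]
```

In an episodic setup, each training episode samples a small support set and a query set per class, and the embedding network is updated so that queries land near their class prototype, which is what makes the scheme viable with scarce labels.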