Multi-Representation Dynamic Adaptation Network for Remote Sensing Scene Classification

2022 
In recent years, convolutional neural networks (CNNs) have made significant progress in remote sensing scene classification (RSSC). Because obtaining large numbers of labeled images is time-consuming and expensive, and because the generalization ability of supervised models is limited, domain adaptation has been widely introduced into RSSC. However, existing adaptation approaches mainly align feature distributions in a single representation space, which loses information and limits the space available for extracting domain-invariant features. In addition, some methods align pixel-level (local) and image-level (global) features simultaneously for better results, but they require manually searching for the best weighting between the two terms, which is time-consuming and computationally expensive. To overcome these issues, a novel feature fusion-and-alignment approach named the multi-representation dynamic adaptation network (MRDAN) is proposed for cross-domain RSSC. Concretely, a feature-fusion adaptation (FFA) module is embedded into the network; it maps samples to multiple representations and fuses them to obtain a broader domain-invariant feature space. Based on this hybrid space, we introduce a cross-domain dynamic feature-alignment mechanism (DFAM) that quantitatively evaluates and adjusts the relative importance of the local and global adaptation losses during domain adaptation. Experimental results on 12 transfer tasks between the UC Merced Land-Use, WHU-RS19, AID, and RSSCN7 datasets demonstrate the effectiveness of the proposed MRDAN over state-of-the-art domain adaptation methods in RSSC.
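To make the two mechanisms concrete, the sketch below gives one plausible PyTorch reading of the abstract: an FFA-style module that projects backbone features into several representation subspaces and concatenates them into a hybrid space, and a DFAM-style combiner that weights the global and local adaptation losses by their current magnitudes instead of a hand-tuned constant. This is a minimal sketch under stated assumptions, not the paper's implementation: all class and function names, the Gaussian-MMD stand-in for the global loss, and the specific weighting rule are illustrative choices.

```python
import torch
import torch.nn as nn

class FeatureFusionAdaptation(nn.Module):
    """Hypothetical FFA-style module: project backbone features into
    several representation subspaces and fuse them by concatenation."""
    def __init__(self, in_dim=2048, branch_dim=256, n_branches=3):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, branch_dim), nn.ReLU())
            for _ in range(n_branches)
        ])

    def forward(self, x):
        # Fused "hybrid" feature space: one concatenated vector per sample.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

def gaussian_mmd(source, target, sigma=1.0):
    """Toy Gaussian-kernel MMD between two feature batches, used here as a
    stand-in for the global (image-level) adaptation loss."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return (k(source, source).mean() + k(target, target).mean()
            - 2 * k(source, target).mean())

class DynamicFeatureAlignment(nn.Module):
    """Hypothetical DFAM-style combiner: weight the global and local losses
    by their relative magnitudes rather than a manually tuned constant."""
    def forward(self, local_loss, global_loss):
        with torch.no_grad():
            # Dynamic factor in [0, 1]; grows when the global gap dominates.
            mu = global_loss / (global_loss + local_loss + 1e-8)
        return mu * global_loss + (1.0 - mu) * local_loss

# Usage sketch with random stand-in features.
ffa = FeatureFusionAdaptation()
src = ffa(torch.randn(8, 2048))          # source-domain backbone features
tgt = ffa(torch.randn(8, 2048))          # target-domain backbone features
global_loss = gaussian_mmd(src, tgt)
local_loss = torch.tensor(0.25)          # placeholder for a pixel-level loss
total = DynamicFeatureAlignment()(local_loss, global_loss)
```

Computing the weight inside torch.no_grad() keeps it a pure scheduling signal, so gradients still flow only through the two loss terms themselves rather than through the weighting rule.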