Full-Level Domain Adaptation for Building Extraction in Very-High-Resolution Optical Remote-Sensing Images

2021 
Convolutional neural networks (CNNs) have achieved tremendous success in computer vision tasks such as building extraction. However, due to domain shift, the performance of CNNs drops sharply on unseen data from another domain, leading to poor generalization. Because it is costly and time-consuming to acquire dense annotations for remote-sensing (RS) images, developing algorithms that can transfer knowledge from a labeled source domain to an unlabeled target domain is of great significance. To this end, we propose a novel full-level domain adaptation network (FDANet) for building extraction that effectively combines image-, feature-, and output-level information. At the image level, a simple Wallis filter method is employed to transfer source images into target-like ones, thereby alleviating radiometric discrepancy and achieving image-level alignment. To further reduce domain shift, adversarial learning is used to enforce feature-distribution consistency constraints between the source and target images, so that feature-level alignment is embedded effectively. At the output level, a mean-teacher model is introduced to enforce a transformation-consistent constraint on the target output, enhancing the regularization effect and suppressing uncertain predictions as much as possible. To further improve performance, a novel self-training strategy is also employed using pseudo labels. The effectiveness of the proposed FDANet is verified on three diverse high-resolution aerial datasets with different resolutions and scenarios. Extensive experimental results and ablation studies demonstrate the superiority of the proposed method.
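The abstract gives no implementation details, but the image-level alignment step can be illustrated with a small sketch. The Python function below is a minimal, assumed form of a Wallis-style radiometric transfer: it matches the per-band mean and standard deviation of a source image to those of a target-domain image. The function name `wallis_transfer` and the `eps` parameter are illustrative and not taken from the paper.

```python
import numpy as np

def wallis_transfer(source, target, eps=1e-6):
    """Sketch of a Wallis-style image-level transfer (assumed form):
    rescale each band of `source` so that its mean and standard
    deviation match those of the corresponding band in `target`.

    source, target: float arrays of shape (H, W, C), values in [0, 1].
    Returns a target-like version of `source`.
    """
    out = np.empty_like(source, dtype=np.float64)
    for c in range(source.shape[-1]):
        s = source[..., c]
        t = target[..., c]
        gain = (t.std() + eps) / (s.std() + eps)   # contrast matching
        out[..., c] = (s - s.mean()) * gain + t.mean()  # brightness matching
    return np.clip(out, 0.0, 1.0)
```

In this reading, the source image keeps its structure (and hence its building labels) while adopting the radiometry of the target domain, which is what makes the transferred images usable for supervised training.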
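The output-level constraint can likewise be sketched under assumptions: a teacher network whose weights are an exponential moving average (EMA) of the student's, and a transformation-consistency loss requiring the student's prediction on a rotated target image to match the rotated teacher prediction. The helpers `ema_update` and `consistency_loss`, and the choice of 90-degree rotations, are placeholders for whatever update rule and transformations the paper actually uses.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    """Update teacher weights as an exponential moving average of the student."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def consistency_loss(student, teacher, target_img, k):
    """Transformation-consistent constraint on unlabeled target images:
    predicting after a k*90-degree rotation should agree with
    predicting first and rotating the prediction afterwards."""
    rotated = torch.rot90(target_img, k, dims=(2, 3))
    student_pred = torch.softmax(student(rotated), dim=1)
    with torch.no_grad():
        teacher_pred = torch.softmax(teacher(target_img), dim=1)
        teacher_pred = torch.rot90(teacher_pred, k, dims=(2, 3))
    return F.mse_loss(student_pred, teacher_pred)
```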