Unifying Top-Down Views by Task-Specific Domain Adaptation

2020 
In this article, we aim to learn a unified representation of images from satellite, aerial, and ground views by exploring their underlying correlations. Inspired by recent advances in domain adaptation (DA), we propose a novel task-specific DA method for this purpose. Unlike traditional DA methods, the proposed method not only applies task-specific classifiers but also introduces domain-specific tasks for the different domains during the adaptation process. The experiments are conducted on two newly proposed ground-/satellite-to-aerial scene adaptation (GSSA) data sets. Because the semantic gap between ground/satellite scenes and aerial scenes is much larger than that between ground scenes, DA between these views is more challenging than traditional DA tasks. On the GSSA data sets, we not only evaluate the proposed unsupervised DA method but also explore few-shot DA in the discussion section. The proposed method is easy to implement and substantially outperforms the state-of-the-art methods on the studied data sets. We hope that the proposed method and the novel GSSA data sets can serve as a good baseline for future research. The related data sets and code will be made available online.
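The abstract describes the key ingredients (a shared representation, task-specific classifiers, and an adaptation step between source and target views) without giving the exact formulation. The sketch below is a minimal, hedged illustration of one common way task-specific classifiers are used for unsupervised DA, in the spirit of discrepancy-based methods such as Maximum Classifier Discrepancy; it is not the authors' released code, and all module names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch (not the paper's implementation): discrepancy-based DA with
# two task-specific classifiers, applied to ground/satellite (source) ->
# aerial (target) scene classification. Numbers and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10   # hypothetical number of scene categories
FEAT_DIM = 256     # hypothetical feature dimension

class FeatureExtractor(nn.Module):
    """Shared backbone G that embeds images from any view."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, FEAT_DIM), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

def make_classifier():
    """One task-specific classifier head on top of the shared features."""
    return nn.Linear(FEAT_DIM, NUM_CLASSES)

def discrepancy(p1, p2):
    """L1 distance between the two classifiers' soft predictions."""
    return (F.softmax(p1, dim=1) - F.softmax(p2, dim=1)).abs().mean()

G, F1, F2 = FeatureExtractor(), make_classifier(), make_classifier()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_f = torch.optim.Adam(list(F1.parameters()) + list(F2.parameters()), lr=1e-4)

def train_step(xs, ys, xt):
    """xs/ys: labeled source batch; xt: unlabeled target (aerial) batch."""
    # (A) Supervised step on the source domain: update G, F1, F2.
    opt_g.zero_grad(); opt_f.zero_grad()
    feat_s = G(xs)
    loss_sup = F.cross_entropy(F1(feat_s), ys) + F.cross_entropy(F2(feat_s), ys)
    loss_sup.backward()
    opt_g.step(); opt_f.step()

    # (B) Fix G; maximize the classifier discrepancy on target samples
    #     while staying accurate on the source.
    opt_f.zero_grad()
    feat_s, feat_t = G(xs).detach(), G(xt).detach()
    loss_f = (F.cross_entropy(F1(feat_s), ys) + F.cross_entropy(F2(feat_s), ys)
              - discrepancy(F1(feat_t), F2(feat_t)))
    loss_f.backward()
    opt_f.step()

    # (C) Fix F1/F2; train G so that target features move to regions where
    #     the two task-specific classifiers agree.
    for _ in range(4):  # illustrative number of generator steps
        opt_g.zero_grad()
        loss_g = discrepancy(F1(G(xt)), F2(G(xt)))
        loss_g.backward()
        opt_g.step()
    return loss_sup.item(), loss_f.item(), loss_g.item()

# Toy usage with random tensors standing in for source/target batches.
xs, ys = torch.randn(8, 3, 64, 64), torch.randint(0, NUM_CLASSES, (8,))
xt = torch.randn(8, 3, 64, 64)
print(train_step(xs, ys, xt))
```

The paper additionally introduces domain-specific tasks for the different views; under this sketch that would correspond to attaching extra heads (and losses) per domain, which is omitted here because the abstract does not specify their form.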