Discriminativeness-Preserved Domain Adaptation for Few-Shot Learning

2020 
Existing few-shot learning (FSL) methods make the implicit assumption that the few target class samples come from the same domain as the source class samples. However, this assumption is often invalid in practice: the target classes may come from a different domain, posing an additional domain adaptation (DA) challenge with few training samples. In this article, we address the problem of cross-domain few-shot learning (CD-FSL), which requires solving FSL and DA in a unified framework. To this end, we propose a novel discriminativeness-preserved domain adaptive prototypical network (DPDAPN) model. It is designed to address a specific challenge in CD-FSL: the DA objective means that the source and target data distributions need to be aligned, typically through a shared domain adaptive feature embedding space, but the FSL objective dictates that each target-domain class distribution must remain distinct from every source-domain class, so aligning the distributions across domains may harm FSL performance. The key is thus to achieve global domain distribution alignment while maintaining per-class discriminativeness in both source and target domains. Our solution is to explicitly enhance the source/target per-class separation before learning the domain adaptive feature embedding in DPDAPN, alleviating the negative effect of domain alignment on FSL. Extensive experiments show that our DPDAPN outperforms state-of-the-art FSL and DA models, as well as their naive combinations.
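To make the tension between the two objectives concrete, here is a minimal numpy sketch of the three ingredients the abstract describes: a prototypical-network classification loss, a global domain alignment term, and a per-class separation penalty. This is an illustrative sketch only; the simple mean-matching alignment and margin-based separation below are assumptions standing in for the paper's actual mechanisms, and all function names are hypothetical.

```python
import numpy as np

def prototypes(support, labels, n_classes):
    # Class prototype = mean embedding of that class's support samples.
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def proto_loss(query, q_labels, protos):
    # Prototypical-network loss: softmax over negative squared
    # Euclidean distances from each query to each class prototype.
    d = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)  # (Q, C)
    log_p = -d - np.log(np.exp(-d).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(q_labels)), q_labels].mean()

def domain_alignment_loss(src_feats, tgt_feats):
    # Toy global alignment: match the first moments (means) of the
    # source and target feature distributions.
    return ((src_feats.mean(axis=0) - tgt_feats.mean(axis=0)) ** 2).sum()

def separation_loss(protos, margin=1.0):
    # Hinge penalty that pushes prototypes of different classes to be
    # at least `margin` apart, preserving per-class discriminativeness.
    d = np.sqrt(((protos[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1) + 1e-12)
    mask = ~np.eye(len(protos), dtype=bool)
    return np.maximum(0.0, margin - d[mask]).mean()
```

A training objective in this spirit would weight the three terms, e.g. `proto_loss(...) + a * domain_alignment_loss(...) + b * separation_loss(...)`: the alignment term pulls the two domains together globally, while the separation term counteracts the collapse of per-class structure that alignment alone could cause.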