Scaling Unsupervised Domain Adaptation through Optimal Collaborator Selection and Lazy Discriminator Synchronization

2021 
Breakthroughs in unsupervised domain adaptation (uDA) have opened up the possibility of adapting models from a label-rich source domain to unlabeled target domains. Prior uDA work has primarily focused on improving adaptation accuracy between a given source and target domain; considerably less attention has been paid to the challenges that arise when uDA is deployed in practical settings. This paper puts forth a novel, complementary perspective and investigates the algorithmic challenges that arise when uDA is deployed in a distributed ML system with multiple target domains. We propose two algorithms: (i) a Collaborator Selection algorithm that chooses an optimal collaborator for each target domain, making uDA systems more accurate and flexible; and (ii) a distributed training strategy that allows adversarial uDA algorithms to train in a privacy-preserving manner. We provide theoretical justifications and empirical results showing that our solution significantly boosts the performance of uDA in practical settings.
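The abstract does not spell out the mechanics of either algorithm, but both ideas lend themselves to a compact illustration. Below is a minimal sketch under two assumptions of ours, not the paper's: (i) collaborator selection ranks candidate domains by a feature-distance proxy, and (ii) "lazy" synchronization means exchanging discriminator weights only every few local steps rather than every iteration, so raw features never leave a node. The names `select_collaborator`, `lazy_sync_step`, and `sync_every` are hypothetical and do not reflect the paper's actual API or selection criterion.

```python
# Hypothetical sketch of collaborator selection + lazy discriminator
# synchronization for distributed adversarial uDA. Assumptions are noted
# inline; the paper's actual algorithms may differ.
import torch
import torch.nn as nn


def select_collaborator(target_feats, candidates):
    """Pick the candidate domain whose mean feature embedding is closest to
    the target's (a hypothetical distance proxy, not the paper's criterion)."""
    t_mean = target_feats.mean(dim=0)
    dists = {name: torch.norm(feats.mean(dim=0) - t_mean).item()
             for name, feats in candidates.items()}
    return min(dists, key=dists.get)


class Discriminator(nn.Module):
    """Toy domain discriminator; each node trains its own local replica."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)


def lazy_sync_step(step, sync_every, local_disc, remote_disc):
    """Average discriminator weights across nodes only every `sync_every`
    steps (assumption: weight averaging; the transfer rule could differ).
    Between syncs, each node updates its replica purely locally."""
    if step % sync_every == 0:
        with torch.no_grad():
            for p_l, p_r in zip(local_disc.parameters(),
                                remote_disc.parameters()):
                avg = 0.5 * (p_l.data + p_r.data)
                p_l.data.copy_(avg)
                p_r.data.copy_(avg)


if __name__ == "__main__":
    torch.manual_seed(0)
    # Fake per-domain feature banks for collaborator selection.
    candidates = {"domain_a": torch.randn(100, 64),
                  "domain_b": torch.randn(100, 64) + 1.0}
    target = torch.randn(50, 64) + 1.0
    print("collaborator:", select_collaborator(target, candidates))

    # Two discriminator replicas trained locally, synced lazily.
    d_src, d_tgt = Discriminator(), Discriminator()
    for step in range(1, 101):
        # ... each node would run its local adversarial update here ...
        lazy_sync_step(step, sync_every=10,
                       local_disc=d_src, remote_disc=d_tgt)
```

In this reading, privacy comes from exchanging only discriminator parameters between nodes, and communication cost drops by a factor of `sync_every` relative to per-step synchronization.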