Domain Generalization via Encoding and Resampling in a Unified Latent Space

2021 
Domain generalization aims to generalize a network trained on multiple source domains to unknown yet related domains. Operating under the assumption that invariant information generalizes well to unknown domains, previous work has aimed to minimize the discrepancies among the distributions of the given domains. However, without prior regularization of the feature distributions in unknown domains, the network in practice overfits to the invariant information of the given domains. Moreover, if the given domains contain insufficient samples, domain generalizability is limited, as diverse domain variations are not captured. To address these two drawbacks, we propose to explicitly map features from both known and unknown domains onto a latent space with a fixed Gaussian mixture distribution via variational coding. As a result, the features of different classes follow Gaussian distributions with different means. The predefined latent space narrows the discrepancies between known and unknown domains and effectively separates samples into different classes. Moreover, we propose to perturb sample features with gradients from the distribution-regularized loss. This perturbation generates samples beyond, but near, the latent space of the prior distribution, thereby enriching the domain variations seen during training. Experiments and visualizations demonstrate the effectiveness of the proposed method.
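
The abstract describes two components: a variational encoder regularized toward a fixed, class-conditional Gaussian mixture prior, and a feature perturbation driven by the gradient of that regularization loss. The PyTorch sketch below illustrates one plausible reading of these two steps; the encoder architecture, the fixed prior means prior_mu, the step size eps, and the loss weight beta are illustrative assumptions, not the authors' exact implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VariationalEncoder(nn.Module):
        """Maps backbone features to a Gaussian posterior N(mu(h), diag(exp(logvar(h))))
        via the reparameterization trick (standard variational coding)."""
        def __init__(self, in_dim=512, latent_dim=64, num_classes=7):
            super().__init__()
            self.fc_mu = nn.Linear(in_dim, latent_dim)
            self.fc_logvar = nn.Linear(in_dim, latent_dim)
            # Fixed (non-trainable) class-wise prior means: one Gaussian mixture
            # component per class, held constant so the latent space is
            # predefined rather than learned (an assumed construction).
            self.register_buffer("prior_mu", 3.0 * torch.randn(num_classes, latent_dim))

        def forward(self, h):
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
            return z, mu, logvar

    def kl_to_class_prior(mu, logvar, prior_mu):
        """KL( N(mu, diag(exp(logvar))) || N(prior_mu, I) ), summed over latent dims."""
        return 0.5 * torch.sum(logvar.exp() + (mu - prior_mu) ** 2 - 1.0 - logvar, dim=1)

    def train_step(backbone, encoder, classifier, x, y, eps=0.5, beta=0.1):
        h = backbone(x)                              # sample features
        z, mu, logvar = encoder(h)
        reg = kl_to_class_prior(mu, logvar, encoder.prior_mu[y]).mean()

        # Perturb the features along the gradient of the distribution-regularized
        # (KL) loss: the perturbed samples lie beyond, but near, the prior mixture.
        grad, = torch.autograd.grad(reg, h, retain_graph=True)
        h_pert = h + eps * grad.sign()
        z_p, mu_p, logvar_p = encoder(h_pert)
        reg_p = kl_to_class_prior(mu_p, logvar_p, encoder.prior_mu[y]).mean()

        # Classify both the original and perturbed latent codes.
        ce = F.cross_entropy(classifier(z), y) + F.cross_entropy(classifier(z_p), y)
        return ce + beta * (reg + reg_p)

    # Example usage with dummy shapes (illustrative only):
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
    encoder = VariationalEncoder(in_dim=512, latent_dim=64, num_classes=7)
    classifier = nn.Linear(64, 7)
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 7, (8,))
    loss = train_step(backbone, encoder, classifier, x, y)
    loss.backward()

Note the design choice in the sketch: because the prior means are fixed buffers rather than learned parameters, the KL term pulls features of each class toward its own predefined mixture component, which matches the abstract's claim that the latent space both narrows domain discrepancies and separates classes.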