Embedding Transfer via Smooth Contrastive Loss

2020 
This paper presents a novel method for embedding transfer, the task of transferring the knowledge of a learned embedding model to another. Our method exploits pairwise similarities between samples in the source embedding space as the knowledge, and transfers it through a loss function used for learning target embedding models. To this end, we design a new loss called smooth contrastive loss, which pulls together or pushes apart a pair of samples in a target embedding space with a strength determined by their semantic similarity in the source embedding space; an analysis of the loss reveals that this property enables more important pairs to contribute more to learning the target embedding space. Experiments on metric learning benchmarks demonstrate that our method improves performance or effectively reduces the size and embedding dimension of target models. Moreover, we show that deep networks trained in a self-supervised manner can be further enhanced by our method with no additional supervision. In all the experiments, our method clearly outperforms existing embedding transfer techniques.
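The abstract does not give the loss formula, but the core idea, pulling or pushing pairs in the target space with a strength set by their source-space similarity, can be sketched as follows. This is a minimal illustrative reading in PyTorch, not the paper's exact definition; the function name, the cosine-similarity weighting, and the margin hinge are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def smooth_contrastive_loss(target_emb, source_emb, margin=1.0):
    """Illustrative pairwise loss: the attraction/repulsion strength of each
    pair is weighted by its semantic similarity in the source embedding space.
    A plausible sketch of the idea, not the paper's exact formulation."""
    # Pairwise cosine similarity in the source space, clipped to [0, 1],
    # used as the attraction weight; its complement is the repulsion weight.
    src = F.normalize(source_emb, dim=1)
    w_pos = (src @ src.t()).clamp(min=0.0)
    w_neg = 1.0 - w_pos

    # Pairwise Euclidean distances in the target embedding space.
    tgt = F.normalize(target_emb, dim=1)
    dist = torch.cdist(tgt, tgt, p=2)

    # Contrastive-style terms: pull semantically similar pairs together,
    # push dissimilar pairs apart up to a margin.
    pull = w_pos * dist.pow(2)
    push = w_neg * F.relu(margin - dist).pow(2)

    # Exclude self-pairs on the diagonal and average over the remaining pairs.
    off_diag = ~torch.eye(len(tgt), dtype=torch.bool, device=tgt.device)
    return (pull + push)[off_diag].mean()
```

In use, `source_emb` would come from the frozen source model and `target_emb` from the target model being trained on the same mini-batch; because the weights vary continuously with source similarity rather than being binary labels, pairs the source model judges more important contribute more to the gradient.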