Directional Self-supervised Learning for Risky Image Augmentations

2021 
Despite the large family of image augmentations, only a few cherry-picked robust augmentation policies benefit standard self-supervised image representation learning. In this paper, we propose a directional self-supervised learning paradigm (DSSL), which is compatible with significantly more augmentations. Specifically, we apply risky augmentation policies on top of standard views augmented by robust augmentations, generating a harder risky view (RV). The risky view usually deviates more from the original image than the standard robust view (SV). Unlike previous methods, which pair all augmented views equally for symmetric self-supervised training that maximizes their similarities, DSSL treats the augmented views of the same instance as a partially ordered set (SV$\leftrightarrow$SV, SV$\leftarrow$RV) and equips directional objective functions that respect the derived relationships among views. DSSL can be implemented in a few lines of pseudocode and is highly flexible across popular self-supervised learning frameworks, including SimCLR, SimSiam, and BYOL. Extensive experimental results on CIFAR and ImageNet demonstrate that DSSL stably improves these frameworks while remaining compatible with a wider range of augmentations.
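The abstract's partial order suggests a simple asymmetric loss: the two standard views pull on each other symmetrically, while the risky view is pulled toward a standard view without pulling back. Below is a minimal sketch of such a directional objective in a SimSiam/BYOL-style setup, assuming a negative-cosine loss with stop-gradient; the names `encoder`, `predictor`, `robust_aug`, and `risky_aug` are hypothetical placeholders, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def neg_cosine(p, z):
    # Negative cosine similarity; z is detached (stop-gradient), so
    # gradients flow only through the predicted branch p.
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

def dssl_loss(encoder, predictor, x, robust_aug, risky_aug):
    sv1, sv2 = robust_aug(x), robust_aug(x)   # two standard (robust) views
    rv = risky_aug(sv1)                       # risky view built on a standard view
    z1, z2, zr = encoder(sv1), encoder(sv2), encoder(rv)
    p1, p2, pr = predictor(z1), predictor(z2), predictor(zr)

    # Symmetric objective between the two standard views: SV <-> SV.
    sym = 0.5 * (neg_cosine(p1, z2) + neg_cosine(p2, z1))
    # Directional objective: SV <- RV. Only the risky view is pulled toward
    # the standard view, so the harder view never drags the standard
    # representation toward its higher-deviation features.
    directional = neg_cosine(pr, z1)
    return sym + directional
```

The asymmetry lives entirely in where the stop-gradient is placed: omitting the reverse term `neg_cosine(p1, zr)` is what keeps the RV target direction out of the training signal, matching the one-way SV$\leftarrow$RV relation described above.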