RVFace: Reliable Vector Guided Softmax Loss for Face Recognition

2022 
Face recognition has witnessed significant progress with the advances of deep convolutional neural networks (CNNs), whose central task is to improve feature discrimination. To this end, several margin-based (e.g., angular, additive, and additive angular margin) softmax loss functions have been proposed to increase the feature margin between different classes. However, despite their great achievements, they mainly suffer from four issues: 1) they assume well-cleaned training sets, without considering the consequences of the noisy labels inherently present in most face recognition datasets; 2) they ignore the importance of mining informative (e.g., semi-hard) features for discriminative learning; 3) they encourage the feature margin only from the perspective of the ground-truth class, without exploiting the discriminability offered by the other, non-ground-truth classes; and 4) they set the feature margin between different classes to the same fixed value, which may not adapt well to unbalanced data across classes. To cope with these issues, this paper develops a novel loss function that explicitly estimates noisy labels in order to drop them, and adaptively emphasizes the semi-hard feature vectors among the remaining reliable ones to guide discriminative feature learning. We can thus address all the above issues and obtain more discriminative features for face recognition. To the best of our knowledge, this is the first attempt to combine the advantages of feature-based noisy label detection, feature mining, and feature margins in a unified loss function. Extensive experimental results on a variety of face recognition benchmarks demonstrate the effectiveness of our method over state-of-the-art alternatives. Our source code is available at http://www.cbsr.ia.ac.cn/users/xiaobowang/ .
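For context, the margin-based softmax losses the abstract refers to all share one mechanism: they modify the logit of the ground-truth class before the softmax, so that a feature must clear a margin to be classified correctly. The sketch below is a minimal PyTorch implementation of one representative member of this family, the additive angular margin (ArcFace-style) loss; it is not the paper's RVFace loss, and the class name, scale s, and margin m are illustrative assumptions rather than values from the paper.

    # Minimal sketch of an additive angular margin softmax loss
    # (ArcFace-style). Illustrates the family of losses the abstract
    # builds on; NOT the paper's RVFace formulation. The name and the
    # default s and m values are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AdditiveAngularMarginLoss(nn.Module):
        def __init__(self, feat_dim, num_classes, s=64.0, m=0.5):
            super().__init__()
            self.s, self.m = s, m
            # One learnable "class center" row per identity.
            self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))

        def forward(self, features, labels):
            # cos(theta) between L2-normalized features and class centers.
            cos_theta = F.linear(F.normalize(features), F.normalize(self.weight))
            cos_theta = cos_theta.clamp(-1.0 + 1e-7, 1.0 - 1e-7)
            theta = torch.acos(cos_theta)
            # Add the angular margin m only to the ground-truth class angle.
            one_hot = F.one_hot(labels, cos_theta.size(1)).bool()
            logits = torch.where(one_hot, torch.cos(theta + self.m), cos_theta)
            # Scale and apply the usual softmax cross-entropy.
            return F.cross_entropy(self.s * logits, labels)

In training, loss = AdditiveAngularMarginLoss(512, num_ids)(embeddings, labels) would replace a plain softmax cross-entropy head; per the abstract, RVFace additionally drops estimated noisy labels and re-weights semi-hard feature vectors on top of such a margin-based formulation.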