Emotion Recognition Using Generative Adversarial Networks

2020 
The ability to perform Emotion Recognition remotely, in complex scenarios and without any particular setup, is beneficial to many applications. In recent years, many methods for Emotion Recognition have been proposed, but they have limitations: they generally require rich facial detail in the target face, no occlusion, and consistent lighting. Methods that do consider occlusion, imbalanced label distributions, and illumination changes still make strong assumptions about the environment (e.g., occluded images are removed, or the degree of label imbalance is small). This paper proposes an Emotion Recognition method that is robust to occlusion and imbalanced labels and is user-independent. Specifically, we design a GAN-based framework that generates images for specified labels and restores occluded images, complementing and completing the data manifold. To address training instability and provide a reliable indicator of training progress, we improve ACGAN. We validate our approach on the CK+ and FER2013 datasets, where it obtains performance comparable or superior to existing methods.
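To make the ACGAN idea underlying the framework concrete, the sketch below shows a minimal auxiliary-classifier GAN in PyTorch: the generator is conditioned on an emotion label, and the discriminator has both a real/fake head and an auxiliary emotion-classification head. This is only an illustration of the general technique, not the authors' implementation; the image size (48x48 grayscale, FER2013-style), the seven emotion classes, and all layer widths are assumptions made for the example.

```python
# Minimal ACGAN-style sketch (PyTorch). Illustrative only; not the paper's code.
import torch
import torch.nn as nn

NUM_CLASSES = 7      # basic emotion categories (assumption)
LATENT_DIM = 100
IMG_SIZE = 48        # FER2013-style grayscale faces (assumption)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, LATENT_DIM)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(True),
            nn.Linear(256, 512), nn.ReLU(True),
            nn.Linear(512, IMG_SIZE * IMG_SIZE), nn.Tanh(),
        )

    def forward(self, z, labels):
        # Condition the noise vector on the target emotion label.
        h = z * self.label_emb(labels)
        return self.net(h).view(-1, 1, IMG_SIZE, IMG_SIZE)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(IMG_SIZE * IMG_SIZE, 512), nn.LeakyReLU(0.2, True),
            nn.Linear(512, 256), nn.LeakyReLU(0.2, True),
        )
        self.adv_head = nn.Linear(256, 1)            # real / fake score
        self.cls_head = nn.Linear(256, NUM_CLASSES)  # auxiliary emotion classifier

    def forward(self, img):
        h = self.features(img.view(img.size(0), -1))
        return self.adv_head(h), self.cls_head(h)

if __name__ == "__main__":
    g, d = Generator(), Discriminator()
    z = torch.randn(4, LATENT_DIM)
    labels = torch.randint(0, NUM_CLASSES, (4,))
    fake = g(z, labels)                      # label-conditioned synthetic faces
    adv, cls = d(fake)
    print(fake.shape, adv.shape, cls.shape)  # (4, 1, 48, 48) (4, 1) (4, 7)
```

In an ACGAN-style setup such as this, label-conditioned generation is what allows synthesizing samples for under-represented emotion classes, while the auxiliary classification head gives the discriminator a class-aware training signal.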