Weakly Supervised Dual Learning for Facial Action Unit Recognition

2019 
Current research on facial action unit (AU) recognition typically requires fully AU-annotated facial images. Compared with facial expression labeling, AU annotation is a time-consuming, expensive, and error-prone process. Inspired by dual learning, we propose a novel weakly supervised dual learning mechanism to train facial AU classifiers from expression-annotated images. Specifically, we treat AU recognition from facial images as the main task and face synthesis given AUs as the auxiliary task. For AU recognition, we force the recognized AUs to satisfy expression-dependent and expression-independent AU dependencies, i.e., domain knowledge about expressions and AUs. For face synthesis given AUs, we minimize the difference between the synthesized face and the ground-truth face whose AUs match the given AUs. By optimizing the dual tasks simultaneously, we leverage their intrinsic connection as well as the domain knowledge about expressions and AUs to facilitate learning AU classifiers from expression-annotated images. Furthermore, we extend the proposed weakly supervised dual learning mechanism to a semi-supervised scenario with partially AU-annotated images. Experimental results on three benchmark databases demonstrate the effectiveness of the proposed approach for both tasks.
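To make the dual-task objective concrete, the sketch below shows one way such a training signal could be wired up in PyTorch: an AU recognizer (main task), a face synthesizer conditioned on the recognized AUs (auxiliary task), a knowledge term that pulls the recognized AU probabilities toward expression-conditioned priors, and a reconstruction term between the synthesized and input faces. The module names (AURecognizer, FaceSynthesizer), the prior table au_prior, the specific network layers, and the loss weighting lam_rec are illustrative assumptions, not the paper's exact architecture or loss formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AURecognizer(nn.Module):
    """Main task: predict AU occurrence probabilities from a face image (stand-in CNN)."""

    def __init__(self, num_aus=12):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_aus)

    def forward(self, img):
        # Sigmoid gives per-AU occurrence probabilities (multi-label output).
        return torch.sigmoid(self.head(self.backbone(img)))


class FaceSynthesizer(nn.Module):
    """Auxiliary task: synthesize a face image from an AU vector (stand-in decoder)."""

    def __init__(self, num_aus=12, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(num_aus, 256), nn.ReLU(),
            nn.Linear(256, 3 * img_size * img_size), nn.Sigmoid(),
        )

    def forward(self, aus):
        return self.net(aus).view(-1, 3, self.img_size, self.img_size)


def dual_loss(img, expr_label, recognizer, synthesizer, au_prior, lam_rec=1.0):
    """Weakly supervised dual objective (sketch):
    (1) a knowledge term pulling the recognized AU probabilities toward the
        expression-conditioned AU priors (domain knowledge), and
    (2) a reconstruction term making the face synthesized from the recognized
        AUs match the input face, which couples the two tasks.
    """
    au_pred = recognizer(img)            # (B, num_aus) recognized AU probabilities
    prior = au_prior[expr_label]         # (B, num_aus) expression-dependent AU priors
    loss_knowledge = F.binary_cross_entropy(au_pred, prior)
    loss_rec = F.l1_loss(synthesizer(au_pred), img)
    return loss_knowledge + lam_rec * loss_rec


# Toy usage: 6 basic expressions, 12 AUs, 64x64 images with values in [0, 1].
num_aus, num_expr = 12, 6
recognizer, synthesizer = AURecognizer(num_aus), FaceSynthesizer(num_aus, 64)
au_prior = torch.rand(num_expr, num_aus)   # placeholder for tabulated expression-AU priors
img = torch.rand(8, 3, 64, 64)
expr = torch.randint(0, num_expr, (8,))
dual_loss(img, expr, recognizer, synthesizer, au_prior).backward()
```

In this sketch, the reconstruction term is what ties the auxiliary synthesis task back to the main recognition task: gradients from the pixel loss flow through the recognized AU probabilities, so the AU classifier is pushed toward AU outputs that suffice to regenerate the face even though no AU labels are used.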