Wasserstein Based EmoGANs

2021 
Generative models, especially Generative Adversarial Networks (GANs) and their variants, have become powerful and popular for image synthesis because their generated images are increasingly similar to real images. Such models help compensate for the shortage of datasets in various areas with their impressively realistic-looking generated images. In this paper, we propose the EmoGANs+ model to generate compound facial expression images with stable adversarial training using the Wasserstein loss. The proposed methodology consists of three steps: preprocessing, image generation with the proposed EmoGANs+, and evaluation. Our experiments are conducted on the Multimedia Understanding Group (MUG) facial expression dataset, the Extended Cohn-Kanade (CK+) dataset, and the Japanese Female Facial Expression (JAFFE) dataset. The proposed model achieves high feature similarity scores between the features of generated images and those of ground-truth compound facial expression images.
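For readers unfamiliar with the Wasserstein loss mentioned above, the sketch below shows the standard WGAN critic and generator objectives with weight clipping, as introduced by Arjovsky et al.; it is a generic PyTorch illustration and an assumption on my part, not the authors' EmoGANs+ implementation, and the function names (`critic_loss`, `generator_loss`, `clip_critic_weights`) are hypothetical.

```python
import torch

def critic_loss(critic: torch.nn.Module,
                real: torch.Tensor,
                fake: torch.Tensor) -> torch.Tensor:
    # Wasserstein critic objective: maximize E[D(real)] - E[D(fake)],
    # implemented here as minimizing its negation.
    return critic(fake).mean() - critic(real).mean()

def generator_loss(critic: torch.nn.Module,
                   fake: torch.Tensor) -> torch.Tensor:
    # The generator tries to raise the critic's score on generated samples.
    return -critic(fake).mean()

def clip_critic_weights(critic: torch.nn.Module, c: float = 0.01) -> None:
    # Weight clipping from the original WGAN paper, used to (roughly)
    # enforce the 1-Lipschitz constraint on the critic.
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)
```

In a typical training loop, the critic is updated (and clipped) several times per generator update; the paper's contribution lies in applying this kind of stable adversarial objective to compound facial expression synthesis.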