Geometrically Editable Face Image Translation with Adversarial Networks.

2021 
Recently, image-to-image translation, which aims to map images in one domain to another specific domain, has received increasing attention. Existing methods mainly solve this task via deep generative models, focusing on exploring the bi-directional or multi-directional relationships between specific domains. Those domains are often categorized by attribute-level or class-level labels, which incorporate no geometric information into the learning process. As a result, existing methods are incapable of editing geometric content during translation. They also neglect higher-level, instance-specific information that could further guide training, leading to many unrealistic, low-fidelity synthesized images, especially for faces. To address these challenges, we formulate the general image translation problem as multi-domain mappings in both geometric and attribute directions within an image set that shares the same latent vector. In particular, we propose a novel Geometrically Editable Generative Adversarial Networks (GEGAN) model that solves this problem for face images by leveraging facial semantic segmentation to explicitly guide geometric editing. In detail, input face images are encoded into latent representations via a variational autoencoder, a segmentor network is designed to impose semantic information on the generated images, and multi-scale regional discriminators are employed to force the generator to attend to the details of key facial components. We provide both quantitative and qualitative evaluations on the CelebA dataset to demonstrate the model's capability for geometric modification and its improvement in image fidelity.
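The abstract names three concrete components: a VAE encoder that maps faces to a shared latent vector, a segmentor network that imposes semantic-segmentation consistency on generated images, and multi-scale regional discriminators applied to key facial components. The minimal PyTorch sketch below illustrates how such components might be wired together. It is a sketch under stated assumptions, not the paper's specification: all layer sizes, the 128x128 input resolution, the 19-class segmentation labels (as in CelebAMask-HQ), the eye-region crop coordinates, and the scale set are illustrative choices, and the generator that decodes the latent vector together with a target segmentation map is omitted for brevity.

```python
# Illustrative sketch of GEGAN-style components; architecture details are
# assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEEncoder(nn.Module):
    """Encodes a face image into a shared latent vector via (mu, logvar)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),     # 128 -> 64
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(128, 256, 4, 2, 1), nn.ReLU(),  # 32 -> 16
        )
        self.fc_mu = nn.Linear(256 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(256 * 16 * 16, latent_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return z, mu, logvar

class Segmentor(nn.Module):
    """Predicts a facial semantic segmentation map from a (generated) image,
    used to impose semantic/geometric consistency on the generator."""
    def __init__(self, num_classes=19):  # 19 face-part classes is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, x):
        return self.net(x)  # per-pixel class logits

class RegionalDiscriminator(nn.Module):
    """PatchGAN-style discriminator run on a cropped facial region at
    several scales, approximating a multi-scale regional discriminator."""
    def __init__(self):
        super().__init__()
        self.d = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, 1, 1),
        )

    def forward(self, region, scales=(1.0, 0.5)):
        outs = []
        for s in scales:
            r = region if s == 1.0 else F.interpolate(
                region, scale_factor=s, mode="bilinear", align_corners=False)
            outs.append(self.d(r))  # one patch-score map per scale
        return outs

if __name__ == "__main__":
    x = torch.randn(2, 3, 128, 128)       # a batch of face images
    z, mu, logvar = VAEEncoder()(x)
    seg_logits = Segmentor()(x)
    eye_crop = x[:, :, 32:64, 32:96]      # hypothetical eye-region crop
    scores = RegionalDiscriminator()(eye_crop)
    print(z.shape, seg_logits.shape, [s.shape for s in scores])
```

In this sketch, the segmentor's cross-entropy loss against ground-truth masks would supply the explicit geometric guidance the abstract describes, while the per-region, per-scale discriminator outputs would each contribute an adversarial term that sharpens key components such as the eyes and mouth.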