Face inpainting based on GAN by facial prediction and fusion as guidance information

2021 
Abstract: Face inpainting, a special case of image inpainting, aims to complete occluded facial regions under unconstrained pose and orientation. However, existing methods generate unsatisfying results with easily detectable flaws: boundaries and details near the holes are often fuzzy. In particular, face-region semantic information (face structure, contour, and content) has not been fully exploited, which leads to unnatural face images with artifacts such as asymmetric eyebrows and eyes of different sizes. This is unacceptable in many practical applications. To solve these problems, a new generative adversarial network that uses facial prediction and fusion as guidance information is proposed for inpainting large missing facial regions. The proposed method completes the face in two stages: coarse inpainting and refinement. In Stage-I, we combine the generator with a new encoder–decoder network built on a variational-autoencoder backbone to predict face-region semantic information (face structure, contour, and content) and perform facial fusion for inpainting. This fully exploits face-region semantics and produces coordinated coarse face images. Stage-II builds on the Stage-I result to refine the face image. Both global and patch discriminators are used to synthesize high-quality, photo-realistic inpainting results. Experimental results on the CelebA and CelebA-HQ datasets demonstrate the effectiveness and efficiency of our method.
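The two-stage flow described in the abstract can be sketched at a high level. The function names below (`two_stage_inpaint`, `compose_inpainting`) and the stand-in generators are illustrative assumptions, not the authors' code; the only operation taken from standard inpainting practice is the mask-based composition, where generated content fills the hole while known pixels are kept unchanged.

```python
import numpy as np

def compose_inpainting(image, generated, mask):
    """Standard inpainting composition: keep known pixels (mask == 0)
    and fill the occluded hole (mask == 1) with generated content."""
    return mask * generated + (1.0 - mask) * image

def two_stage_inpaint(image, mask, coarse_generator, refine_generator):
    """Hypothetical sketch of the paper's two-stage flow:
    Stage-I predicts a coarse completion guided by face-region
    semantics; Stage-II refines the Stage-I result."""
    coarse = coarse_generator(image * (1.0 - mask), mask)   # Stage-I
    coarse = compose_inpainting(image, coarse, mask)
    refined = refine_generator(coarse, mask)                # Stage-II
    return compose_inpainting(image, refined, mask)

# Toy demo with stand-in "generators" that return constant images.
img = np.ones((4, 4))
msk = np.zeros((4, 4))
msk[1:3, 1:3] = 1.0  # occluded hole in the centre
out = two_stage_inpaint(img, msk,
                        lambda x, m: np.full_like(x, 0.5),
                        lambda x, m: np.full_like(x, 0.25))
# Known pixels are preserved; the hole holds the refined content.
assert np.all(out[0, :] == 1.0)
assert np.all(out[1:3, 1:3] == 0.25)
```

In the actual method the two generators are trained adversarially against global and patch discriminators, so that the filled hole is consistent both with the whole face and with local texture around the hole boundary.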