Generative Memory-Guided Semantic Reasoning Model for Image Inpainting
The critical challenge of single image inpainting stems from accurate semantic inference from limited information while maintaining image quality. Typical methods for semantic image inpainting train an encoder-decoder network to learn a one-to-one mapping from the corrupted image to its inpainted version. While such methods perform well on images with small corrupted regions, they struggle with large corrupted regions due to two potential limitations: 1) the one-to-one mapping paradigm tends to overfit each individual training pair of images; 2) the inter-image prior knowledge about the general distribution patterns of visual semantics, which can be transferred across images sharing similar semantics, is not explicitly exploited. In this paper, we propose the Generative Memory-guided Semantic Reasoning Model (GM-SRM), which infers the content of corrupted regions based not only on the known regions of the corrupted image, but also on learned inter-image reasoning priors that characterize generalizable semantic distribution patterns across similar images. In particular, the proposed GM-SRM first pre-learns a generative memory from the whole training set to explicitly capture the distribution of different semantic patterns. The learned memory is then leveraged to retrieve the semantics matching the current corrupted image, enabling semantic reasoning during inpainting. While the encoder-decoder network guarantees pixel-level content consistency, our generative priors are favorable for high-level semantic reasoning, which is particularly effective for inferring semantic content in large corrupted regions. Extensive experiments on the Paris Street View, CelebA-HQ, and Places2 benchmarks demonstrate that our GM-SRM outperforms state-of-the-art image inpainting methods in terms of both visual quality and quantitative metrics.
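The abstract does not specify how the learned memory is queried. As a minimal sketch of what memory-guided semantic retrieval could look like, the snippet below performs soft attention over a bank of pre-learned pattern vectors using an encoder feature of the corrupted image as the query; all names, shapes, and the cosine-similarity attention formulation are illustrative assumptions, not details from the paper.

```python
import numpy as np

def retrieve_from_memory(query, memory, temperature=1.0):
    """Soft retrieval of semantic patterns from a learned memory bank.

    query:  (d,) feature vector, e.g. from the encoder of the corrupted image
    memory: (K, d) bank of K pre-learned semantic pattern vectors
    Returns a (d,) vector: a similarity-weighted mixture of memory slots
    that can be fused with the decoder features (fusion step omitted here).
    """
    # Cosine similarity between the query and every memory slot.
    q = query / (np.linalg.norm(query) + 1e-8)
    m = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    sims = m @ q                                   # (K,)

    # Softmax attention weights over the memory slots.
    weights = np.exp(sims / temperature)
    weights /= weights.sum()

    # Retrieved semantics: weighted combination of stored patterns.
    return weights @ memory                        # (d,)

# Toy usage with random data (K=64 memory slots, d=128 feature dims).
rng = np.random.default_rng(0)
memory = rng.standard_normal((64, 128))
query = rng.standard_normal(128)
retrieved = retrieve_from_memory(query, memory)
print(retrieved.shape)  # (128,)
```

A lower temperature makes the retrieval sharper (closer to picking the single best-matching pattern), while a higher temperature blends more memory slots into the retrieved representation.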