Pseudo Decoder Guided Light-Weight Architecture for Image Inpainting

2022 
Image inpainting is one of the most important and widely used approaches in which the missing regions of an input image are synthesized. It has various applications such as undesired object removal and virtual garment shopping. Image inpainting methods may use the knowledge of hole locations to effectively regenerate contents in an image. Existing image inpainting methods achieve impressive results with coarse-to-fine architectures or with the use of guided information such as edges and structures. However, coarse-to-fine architectures require substantial resources, leading to a high computational cost, while methods based on edge or structural information depend on auxiliary models to generate the guiding information for inpainting. In this context, we propose a computationally efficient, light-weight network for image inpainting with a very small number of parameters (0.97M) and without any guided information. The proposed architecture consists of a multi-encoder level feature fusion module, a pseudo decoder, and a regeneration decoder. The multi-encoder level feature fusion module extracts relevant information from each of the encoder levels to merge structural and textural information from various receptive fields. This information is then processed by the pseudo decoder, followed by a space-depth correlation module, to assist the regeneration decoder in the inpainting task. The experiments are performed with different types of masks and compared with the state-of-the-art methods on three benchmark datasets, i.e., Paris Street View (PARIS_SV), Places2, and CelebA_HQ. In addition, the proposed network is tested on high-resolution images ($1024\times1024$ and $2048\times2048$) and compared with the existing methods. The extensive comparison with state-of-the-art methods, computational complexity analysis, and ablation study demonstrate the effectiveness of the proposed framework for image inpainting.
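To make the described pipeline concrete, the sketch below arranges the named components (encoder levels, multi-level feature fusion, pseudo decoder, space-depth correlation, regeneration decoder) in the order the abstract gives. The abstract only names these modules; all channel counts, layer choices, and module internals here are assumptions for illustration, not the authors' implementation.

```python
# Minimal PyTorch sketch of the pipeline named in the abstract.
# All shapes, channel counts, and module internals are assumptions;
# the paper names the components but not their implementations.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiLevelFeatureFusion(nn.Module):
    """Merges encoder features from several receptive fields (assumed design)."""
    def __init__(self, channels=(32, 64, 128), out_ch=128):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in channels)
        self.merge = nn.Conv2d(out_ch * len(channels), out_ch, 3, padding=1)

    def forward(self, feats):
        # Align every level to the deepest level's spatial size, then merge.
        h, w = feats[-1].shape[-2:]
        aligned = [F.interpolate(p(f), size=(h, w), mode="bilinear",
                                 align_corners=False)
                   for p, f in zip(self.proj, feats)]
        return self.merge(torch.cat(aligned, dim=1))


class InpaintingNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Light-weight encoder: three strided conv levels (assumed).
        self.enc1 = nn.Sequential(nn.Conv2d(4, 32, 3, 2, 1), nn.ReLU(True))
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(True))
        self.enc3 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU(True))
        self.fusion = MultiLevelFeatureFusion()
        # Pseudo decoder: produces intermediate guidance features (assumed).
        self.pseudo_dec = nn.Sequential(nn.Conv2d(128, 128, 3, padding=1),
                                        nn.ReLU(True))
        # Space-depth correlation: pixel shuffle trades channel depth for
        # spatial resolution (a stand-in for the paper's module).
        self.space_depth = nn.PixelShuffle(2)
        # Regeneration decoder: upsamples back to image resolution.
        self.regen = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, image, mask):
        # Input: masked image concatenated with the binary hole mask.
        x = torch.cat([image * (1 - mask), mask], dim=1)
        f1 = self.enc1(x)
        f2 = self.enc2(f1)
        f3 = self.enc3(f2)
        fused = self.fusion([f1, f2, f3])
        out = self.space_depth(self.pseudo_dec(fused))
        return self.regen(out)


if __name__ == "__main__":
    img = torch.rand(1, 3, 256, 256)
    mask = (torch.rand(1, 1, 256, 256) > 0.8).float()
    print(InpaintingNet()(img, mask).shape)  # torch.Size([1, 3, 256, 256])
```

Note the design point the abstract emphasizes: the guidance comes from fusing the network's own multi-level encoder features rather than from an external edge or structure model, which is what keeps the parameter budget small.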