GCM-Net: Towards Effective Global Context Modeling for Image Inpainting

2021 
Deep-learning-based inpainting methods have achieved promising performance for image restoration; however, current methods still tend to produce implausible structures and blurry textures when processing heavily corrupted images. In this paper, we propose a new image inpainting method termed Global Context Modeling Network (GCM-Net). By capturing global contextual information, GCM-Net can better recover missing regions in images damaged by irregular masks. Specifically, we first use four convolution layers to extract shallow features. Then, we design a progressive multi-scale fusion block (PMSFB) that extracts and fuses multi-scale features to obtain local features. In addition, a dense context extraction (DCE) module is designed to aggregate the local features extracted by the PMSFBs. To improve information flow, a channel-attention-guided residual learning module is deployed in both the DCE module and the PMSFB; it reweights the learned residual features and refines the extracted information. To capture more global contextual information and enhance representation ability, a coordinate context attention (CCA) based module is also presented. Finally, the information-rich features are decoded into the inpainting result. Extensive experiments on the Paris Street View, Places2, and CelebA-HQ datasets demonstrate that our method better recovers structures and textures and delivers significant improvements over related inpainting methods.
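To make the described components concrete, below is a minimal PyTorch sketch of two of the building blocks named in the abstract: a channel-attention-guided residual block (of the kind deployed in the DCE module and PMSFB) and a coordinate-attention-style module in the spirit of CCA, which pools along the height and width axes separately to capture position-aware global context. All class names, layer widths, reduction ratios, and wiring here are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch only: the paper's actual GCM-Net modules may differ.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (reweights channels)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global spatial squeeze -> (N, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))  # per-channel reweighting


class CAResidualBlock(nn.Module):
    """Residual block whose residual branch is reweighted by channel attention,
    as a guess at the channel-attention-guided residual learning module."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.ca = ChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.ca(self.body(x))  # refined residual + identity


class CoordinateAttention(nn.Module):
    """Coordinate-attention-style global context: pooling along H and W
    separately keeps positional information in both directions."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Direction-aware pooling: (N, C, H, 1) and (N, C, W, 1) after permute.
        x_h = x.mean(dim=3, keepdim=True)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (N, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (N, C, 1, W)
        return x * a_h * a_w  # position-aware gating of the features


if __name__ == "__main__":
    feats = torch.randn(1, 64, 64, 64)  # dummy feature map
    print(CAResidualBlock(64)(feats).shape)      # torch.Size([1, 64, 64, 64])
    print(CoordinateAttention(64)(feats).shape)  # torch.Size([1, 64, 64, 64])
```

The direction-aware pooling in the coordinate-attention sketch is what lets the gate encode where along each axis informative context lies, which is plausibly why a CCA-style module helps propagate distant context into large masked regions.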