Fine-Tuning of Deep Contexts Toward Improved Perceptual Quality of In-Paintings

2021 
In recent years, a number of deep learning approaches have been successfully introduced to tackle the problem of image in-painting and achieve better perceptual effects. However, obvious hole-edge artifacts still remain in these deep learning-based approaches, and they need to be rectified before such methods become useful for practical applications. In this article, we propose an iteration-driven in-painting approach, which combines a deep context model with the backpropagation mechanism to fine-tune the learning-based in-painting process and hence achieves further improvement over the existing state of the art. Our iterative approach fine-tunes the image generated by a pretrained deep context model via backpropagation using a weighted context loss. Extensive experiments on publicly available test sets, including the CelebA, Paris Streets, and PASCAL VOC 2012 datasets, show that our proposed method achieves better visual perceptual quality in terms of hole-edge artifacts compared with state-of-the-art in-painting methods using various context models.
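The iterative fine-tuning idea described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the pixel-space update, and the boundary-emphasizing `weights` are all assumptions; the actual method backpropagates a weighted context loss through a pretrained deep context model rather than updating pixels directly.

```python
import numpy as np

def weighted_context_loss(x, reference, mask, weights):
    """Squared error over known-context pixels only (mask == 1).

    `weights` is assumed to emphasize pixels near the hole boundary,
    where hole-edge artifacts are most visible.
    """
    diff = (x - reference) * mask * weights
    return float(np.sum(diff ** 2))

def finetune(x, reference, mask, weights, lr=0.1, steps=50):
    """Iteratively refine the in-painted image x by gradient descent.

    Analytic gradient of the loss above; a real system would instead
    backpropagate through the generator network's parameters or input.
    """
    for _ in range(steps):
        grad = 2.0 * (x - reference) * (mask * weights) ** 2
        x = x - lr * grad
    return x
```

On a toy 8x8 image with a square hole, a few dozen steps of this loop drive the weighted context loss down over the known region, which is the mechanism the iterative approach relies on, applied here in the simplest possible setting.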