Bridging Unpaired Facial Photos and Sketches by Line-Drawings

2021 
In this paper, we propose a novel method for learning face sketch synthesis models from unpaired data. Our main idea is to bridge the photo domain ${\mathcal{X}}$ and the sketch domain ${\mathcal{Y}}$ via the line-drawing domain ${\mathcal{Z}}$. Specifically, we map both photos and sketches to line-drawings using a neural style transfer method, i.e., $F:{\mathcal{X}}/{\mathcal{Y}} \mapsto {\mathcal{Z}}$. This yields pseudo paired data $({\mathcal{Z}},{\mathcal{Y}})$, from which the mapping $G:{\mathcal{Z}} \mapsto {\mathcal{Y}}$ can be learned in a supervised manner. At inference time, a facial photo is first transferred to a line-drawing and then to a sketch by $G \circ F$. Additionally, we propose a novel stroke loss for generating different types of strokes. Our method, termed sRender, accords well with human artists’ rendering process. Experimental results demonstrate that sRender generates multi-style sketches and significantly outperforms existing unpaired image-to-image translation methods.
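The two-stage pipeline can be sketched as simple function composition. This is a minimal, hypothetical illustration, not the paper's implementation: `F` and `G` below are placeholder callables standing in for the style-transfer network ($F:{\mathcal{X}}/{\mathcal{Y}} \mapsto {\mathcal{Z}}$) and the generator learned from the pseudo pairs ($G:{\mathcal{Z}} \mapsto {\mathcal{Y}}$).

```python
# Hypothetical sketch of sRender's inference path G(F(photo)).
# F and G are stand-ins; the real networks operate on image tensors.

def F(image):
    """Stand-in for the neural style transfer mapping photos/sketches
    to the line-drawing domain Z."""
    return {"domain": "line-drawing", "content": image["content"]}

def G(line_drawing):
    """Stand-in for the generator Z -> Y, trained in a supervised
    manner on pseudo pairs (Z, Y)."""
    return {"domain": "sketch", "content": line_drawing["content"]}

def srender_infer(photo):
    """Inference stage: photo -> line-drawing -> sketch, i.e. G o F."""
    return G(F(photo))

photo = {"domain": "photo", "content": "face_001"}
sketch = srender_infer(photo)
```

Training only requires the pseudo pairs $({\mathcal{Z}},{\mathcal{Y}})$, which is what lets the method sidestep the need for aligned photo–sketch pairs.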