Decoupled Representation Learning for Character Glyph Synthesis

2021 
Character glyph synthesis remains an open and challenging problem that involves two related aspects: font style transfer and content consistency. In this paper, we propose a novel model named FontGAN, which integrates character structure stylization, de-stylization, and texture transfer into a unified framework. Specifically, we decouple character images into a style representation and a content representation, which offers fine-grained control over these two types of variables and thus improves the quality of the generated results. To capture style information effectively, we introduce a style consistency module (SCM). Technically, the SCM exploits a category-guided Kullback-Leibler divergence to explicitly model the style representations as different prior distributions; in this way, our model is capable of performing transformations between multiple domains within one framework. In addition, we propose a content prior module (CPM) that provides a content prior to guide the content encoding process and alleviate the problem of stroke deficiency during structure de-stylization. Benefiting from this decoupling-and-regrouping strategy, FontGAN can perform many-to-many translation of glyph structures. Experimental results demonstrate that the proposed FontGAN achieves state-of-the-art performance in character glyph synthesis.
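The abstract itself provides no code; the sketch below is only an illustration of how a category-guided Kullback-Leibler term of the kind described above might be implemented, assuming each font category is assigned its own learnable Gaussian prior. All names here (category_kl, prior_mu, prior_logvar) are hypothetical and not taken from the paper.

```python
import torch

def category_kl(mu, logvar, prior_mu, prior_logvar):
    """Closed-form KL( N(mu, sigma^2) || N(prior_mu, prior_sigma^2) ),
    summed over latent dimensions and averaged over the batch."""
    kl = 0.5 * (
        prior_logvar - logvar
        + (logvar.exp() + (mu - prior_mu) ** 2) / prior_logvar.exp()
        - 1.0
    )
    return kl.sum(dim=1).mean()

# Hypothetical setup: one learnable Gaussian prior per font category.
num_fonts, z_dim = 10, 64
prior_mu = torch.nn.Parameter(torch.randn(num_fonts, z_dim))
prior_logvar = torch.nn.Parameter(torch.zeros(num_fonts, z_dim))

# Placeholder outputs of a style encoder for a batch of glyph images.
batch = 8
mu = torch.randn(batch, z_dim)                   # posterior means
logvar = torch.zeros(batch, z_dim)               # posterior log-variances
labels = torch.randint(0, num_fonts, (batch,))   # font-category labels

# Each sample is pulled toward the prior of its own font category.
loss_scm = category_kl(mu, logvar, prior_mu[labels], prior_logvar[labels])
```

Under this reading, minimizing the term pushes the style codes of glyphs from the same font toward a shared distribution, which is what would allow a single framework to translate between multiple style domains.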