Style-Guided Image-to-Image Translation for Multiple Domains

2021 
Cross-domain image translation has drawn increasing attention. It aims to translate images from a source domain into target domains so that the translated images can appear in multiple styles. The most popular approaches use encoders to extract style features from the source domain and then feed them into a generator to produce new images. However, these methods are usually suited only to two-domain translation and exhibit low diversity across multiple domains, because the extracted style features are merely used as raw input to the generator rather than being fully exploited. In this paper, we design a novel loss function, the style-guided diversity loss (Sd loss), which utilizes the extracted style features to encourage the model to explore the image space and discover diverse images. We prove theoretically that the proposed loss is superior to the diversity-sensitive loss used in state-of-the-art approaches. In addition, qualitative and quantitative experiments demonstrate the superiority of the proposed approach over several state-of-the-art approaches in terms of both the quality and the diversity of the translated images.
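
The abstract does not give the exact form of the Sd loss, so the following is only a contextual sketch: a minimal PyTorch rendering of the diversity-sensitive baseline the paper compares against (as used in StarGAN v2 / MSGAN), plus one plausible style-guided variant in which the output difference is scaled by the style-code distance. The names `generator`, `s1`, `s2`, and the ratio form of `style_guided_diversity_loss` are assumptions for illustration, not taken from the paper.

```python
import torch

def diversity_sensitive_loss(generator, x, s1, s2):
    """Baseline diversity-sensitive loss (StarGAN v2 style):
    push outputs produced from two different style codes apart.
    Returned negated so that minimizing it maximizes diversity."""
    y1 = generator(x, s1)
    y2 = generator(x, s2)
    return -torch.mean(torch.abs(y1 - y2))

def style_guided_diversity_loss(generator, x, s1, s2, eps=1e-5):
    """Hypothetical sketch of a style-guided diversity (Sd) loss:
    here the image difference is normalized by the distance between
    the style codes (the MSGAN mode-seeking ratio applied to style
    features), so dissimilar styles must yield proportionally
    dissimilar images. This form is an assumption, not the paper's
    published definition."""
    y1 = generator(x, s1)
    y2 = generator(x, s2)
    image_dist = torch.mean(torch.abs(y1 - y2))
    style_dist = torch.mean(torch.abs(s1 - s2))
    # Negated ratio: minimizing encourages diversity in proportion
    # to how far apart the two style codes are.
    return -image_dist / (style_dist + eps)
```

One motivation for such a ratio, consistent with the abstract's claim of "making full use" of the style features, is that the plain baseline rewards any output difference equally, whereas a style-guided term ties the required diversity to the styles actually sampled.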