PSceneGAN: Multi-Domain Particular Scenes Generation Based on Conditional Generative Adversarial Networks

2019 
Generative adversarial networks (GANs) have achieved remarkable success in image generation. However, multi-domain particular scene generation, which converts specific object images into different reasonable scene domains, remains an open problem. In this paper, we propose a multi-domain particular scene generation model named PSceneGAN (Particular Scene Generative Adversarial Nets), a novel dual-condition GAN. PSceneGAN is the first model to achieve one-to-many specific scene generation under the guidance of semantics using only one model. In addition, we collect and label a novel high-quality clothing dataset named DRESS and use it to verify PSceneGAN on a challenging task. The results show that PSceneGAN not only accurately generates reasonable scene images corresponding to the input scene and semantic descriptions, but also achieves strong results in quantitative and qualitative evaluation, with a Fréchet inception distance (FID) of 25.40 and an inception score (IS) of 36.24.
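The abstract describes PSceneGAN as a dual-condition GAN whose generator is guided by both a target scene domain and a semantic description. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the paper's actual architecture: the class name, layer sizes, and the choice to tile both conditions as spatial maps are assumptions made for illustration only.

```python
# Hypothetical sketch of a dual-condition generator (not the paper's code).
# The generator takes an object image plus two conditions: a one-hot target
# scene-domain label and a semantic-description embedding.
import torch
import torch.nn as nn

class DualConditionGenerator(nn.Module):
    def __init__(self, num_domains=5, text_dim=128, base_channels=64):
        super().__init__()
        # Both conditions are broadcast to spatial maps and concatenated
        # with the input image along the channel dimension.
        in_channels = 3 + num_domains + text_dim
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, 7, padding=3),
            nn.InstanceNorm2d(base_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base_channels * 2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, 3, 7, padding=3),
            nn.Tanh(),
        )

    def forward(self, image, domain_onehot, text_embedding):
        b, _, h, w = image.shape
        # Tile the domain label and the semantic embedding over the spatial grid.
        domain_map = domain_onehot.view(b, -1, 1, 1).expand(-1, -1, h, w)
        text_map = text_embedding.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([image, domain_map, text_map], dim=1))

# Example usage with random tensors standing in for real inputs.
gen = DualConditionGenerator()
img = torch.randn(2, 3, 128, 128)                 # object images
domain = torch.eye(5)[torch.tensor([0, 3])]       # target scene domains (one-hot)
text = torch.randn(2, 128)                        # semantic-description embeddings
out = gen(img, domain, text)
print(out.shape)  # torch.Size([2, 3, 128, 128])
```

In this sketch the two conditions enter the generator the same way; the actual model may combine them differently (e.g., via separate encoders or attention), which the abstract does not specify.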