Semantically Consistent Text-to-Fashion Image Synthesis with an Enhanced Attentional Generative Adversarial Network

2020 
Abstract

Recent advances in Generative Adversarial Networks (GANs) have led to significant improvements in a variety of image generation tasks, including image synthesis from text descriptions. In this paper, we present an enhanced Attentional Generative Adversarial Network (e-AttnGAN) with improved training stability for text-to-image synthesis. e-AttnGAN’s integrated attention module utilizes both sentence and word context features and performs feature-wise linear modulation (FiLM) to fuse visual and natural-language representations. In addition to the multimodal similarity learning between text and image features used in AttnGAN [1], we include similarity and feature-matching losses between real and generated images, together with classification losses for “significant attributes”. To improve training stability and mitigate mode collapse, spectral normalization and the two time-scale update rule (TTUR) are applied to the discriminator, together with instance noise. Our experiments show that e-AttnGAN outperforms state-of-the-art methods on the FashionGen and DeepFashion-Synthesis datasets in terms of inception score, R-precision, and classification accuracy. A detailed ablation study is conducted to assess the contribution of each component.
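To make the FiLM-based fusion concrete, the sketch below shows how per-channel scale and shift parameters predicted from a text embedding can modulate visual feature maps. This is a minimal illustration of the general FiLM technique the abstract names, not the authors' implementation; the class name, layer choices, and dimensions are assumptions.

    import torch
    import torch.nn as nn

    class FiLMFusion(nn.Module):
        """Illustrative FiLM conditioning: fuse visual feature maps with a
        text context vector (sentence or word features). Names and sizes
        are hypothetical, not taken from the e-AttnGAN code."""
        def __init__(self, text_dim: int, num_channels: int):
            super().__init__()
            # Predict per-channel scale (gamma) and shift (beta) from text.
            self.to_gamma = nn.Linear(text_dim, num_channels)
            self.to_beta = nn.Linear(text_dim, num_channels)

        def forward(self, visual: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
            # visual: (B, C, H, W) feature maps; text: (B, text_dim) context.
            gamma = self.to_gamma(text).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
            beta = self.to_beta(text).unsqueeze(-1).unsqueeze(-1)    # (B, C, 1, 1)
            # Feature-wise linear modulation of the visual representation.
            return gamma * visual + beta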
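The stabilization recipe (spectral normalization on the discriminator, TTUR, and instance noise) can likewise be sketched in a few lines. Everything below is a generic PyTorch illustration under assumed hyperparameters (learning rates, noise level, layer shapes); the paper's actual values may differ.

    import torch
    import torch.nn as nn
    from torch.nn.utils import spectral_norm

    # Spectral normalization wraps each weight layer of the discriminator,
    # assuming 64x64 RGB inputs for the illustrative shapes.
    discriminator = nn.Sequential(
        spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)),
        nn.LeakyReLU(0.2),
        spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),
        nn.LeakyReLU(0.2),
        nn.Flatten(),
        spectral_norm(nn.Linear(128 * 16 * 16, 1)),
    )

    # Two time-scale update rule: the discriminator trains with a larger
    # learning rate than the generator (4e-4 vs. 1e-4 is an assumption).
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=4e-4, betas=(0.0, 0.9))
    # opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))

    def add_instance_noise(images: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
        # Instance noise: perturb both real and generated images before the
        # discriminator sees them, smoothing its decision boundary.
        return images + sigma * torch.randn_like(images)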