Controllable Image Dataset Construction Using Conditionally Transformed Inputs in Generative Adversarial Networks

2021 
In this paper, we tackle the well-known problem of dataset construction from the perspective of generating the data with generative adversarial networks (GANs). Because the semantic information of a dataset must be properly aligned with its images, controlling the image generation process of a GAN becomes the primary concern. Accordingly, we focus on conditioning the generative process using conditional information alone to achieve reliable control over image generation. Unlike existing works that process the input (noise or image) jointly with conditions, our work transforms the input directly into the conditional space using only the given conditions. In doing so, we expose the relations among conditions and determine a distinct, reliable feature space for them that is unaffected by the input information. To fully leverage the conditional information, we propose a novel architectural framework (conditional transformation) that learns features solely from a set of conditions and guides a generative model by transforming the generator's input. This approach enables control of the generator by setting its inputs according to the specific conditions required for semantically correct image generation. Because the framework operates at the initial stage of generation, it can be plugged into any existing generative model and trained end to end together with the generator. Extensive experiments on various tasks, including novel image synthesis and image-to-image translation, demonstrate that conditional transformation of the inputs provides solid control over the image generation process, showing its applicability to dataset construction.
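For illustration only, the core idea of transforming the generator input using conditions alone can be sketched in PyTorch as below. The module name, the affine form of the transform, and all hyperparameters are assumptions made for this sketch, not the paper's actual architecture.

```python
# Minimal sketch: a module that learns a transformation of the generator
# input purely from the conditions (class labels here). The affine form
# (scale, shift) and all names/sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalTransform(nn.Module):
    """Maps the latent input z into a condition-dependent space using
    parameters predicted from the condition alone."""
    def __init__(self, num_classes: int, latent_dim: int, cond_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(num_classes, cond_dim)
        # Predict an affine transform of z from the condition only;
        # z itself never influences these parameters.
        self.to_scale = nn.Linear(cond_dim, latent_dim)
        self.to_shift = nn.Linear(cond_dim, latent_dim)

    def forward(self, z: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        c = self.embed(labels)            # (B, cond_dim), from conditions only
        scale = 1.0 + self.to_scale(c)    # residual scaling around identity
        shift = self.to_shift(c)
        return scale * z + shift          # transformed input for any generator

# Usage: prepend the transform to an off-the-shelf generator G and train
# both end to end, e.g.:
#   z = torch.randn(batch_size, latent_dim)
#   images = G(ConditionalTransform(num_classes, latent_dim)(z, labels))
```

Because the transform sits before the generator's first layer, any unconditional generator can, in principle, be reused unchanged, consistent with the plug-in property described in the abstract.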