Fast transformation of discriminators into encoders using pre-trained GANs

2022 
A typical generative adversarial network (GAN) consists of a generator and a discriminator. Finely tuned deep GANs can now synthesize high-quality (HQ) images via their generators. However, the discriminator in a typical GAN only learns to distinguish real from fake images during training. Moreover, some images synthesized by GANs are imperfect, and GANs by themselves cannot reconstruct images. In this paper, we revisit pre-trained GANs and offer a self-supervised method to quickly transform a GAN's discriminator into an encoder. We reuse the parameters of the GAN's discriminator and replace its output layer, so that it becomes an encoder that outputs reformed latent vectors. The transformation makes the GAN architecture more symmetrical and allows for better performance. Based on this method, GANs can reconstruct synthesized images via GAN encoders. Compared to the synthesized images, these reconstructions maintain or even attain higher quality. The code and pre-trained models are available at .
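The following is a minimal sketch, not the authors' code, of the idea described in the abstract: reuse a pre-trained discriminator's parameters, replace its real/fake output head with a layer that emits latent vectors, and train the resulting encoder self-supervised on images synthesized by the frozen generator. The class names, the `features` attribute split, the latent dimension, and the loss terms are illustrative assumptions.

```python
import torch
import torch.nn as nn

LATENT_DIM = 512  # assumed latent size of the pre-trained GAN


class EncoderFromDiscriminator(nn.Module):
    """Reuse the discriminator's convolutional trunk; swap the scalar
    real/fake head for a new layer that outputs a latent vector."""

    def __init__(self, discriminator: nn.Module, feature_dim: int):
        super().__init__()
        # Assumption: the discriminator exposes its trunk as `features`;
        # its original output layer is simply discarded.
        self.features = discriminator.features
        self.to_latent = nn.Linear(feature_dim, LATENT_DIM)  # new output layer

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        h = self.features(img).flatten(1)
        return self.to_latent(h)  # reformed latent vector


def train_step(G: nn.Module, E: nn.Module, optimizer: torch.optim.Optimizer,
               batch_size: int = 16, device: str = "cuda") -> float:
    """One self-supervised step: the encoder learns to recover the latent
    code of images the frozen generator synthesized from random z."""
    z = torch.randn(batch_size, LATENT_DIM, device=device)
    with torch.no_grad():
        fake = G(z)              # synthesized images (no gradient to G)
    z_hat = E(fake)              # encoder's reformed latent vectors
    recon = G(z_hat)             # reconstruction via the GAN encoder
    # Match both the latent codes and the reconstructed pixels.
    loss = (nn.functional.mse_loss(z_hat, z)
            + nn.functional.mse_loss(recon, fake))
    optimizer.zero_grad()
    loss.backward()              # only the encoder's parameters are updated
    optimizer.step()
    return loss.item()
```

In this sketch only the encoder's parameters are passed to the optimizer, so the pre-trained generator stays fixed while gradients flow through it to the new output layer and the reused discriminator trunk.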