Research on Knowledge Distillation of Generative Adversarial Networks

2021 
The compression of Generative Adversarial Networks (GANs) has been an emerging area of study in recent years. However, conventional compression methods can hardly be applied to GANs because their training process and optimization target differ from those of traditional classification and detection networks. Inspired by the recent success of knowledge distillation, we condense recent research on distilling particular GANs (i.e., WGAN and CGAN for specific tasks) into two strategies, the Soft Target Only Strategy (STOS) and the Inherited Hard Target Strategy (IHTS), and propose a novel Random Hard Target Strategy (RHTS). In RHTS, we take the student generator's loss, computed with the discriminator fixed, as the hard target of knowledge distillation, and the output of the teacher generator as the soft target. Experiments on image datasets (MNIST, CIFAR-10) and structured-data datasets (the Australian Credit Approval dataset and the Credit Approval Data Set) show that STOS and IHTS are effective and that RHTS performs best. These results are further verified on WGAN, WGAN-GP, and LSGAN, demonstrating the strategies' generalization ability. In addition, RHTS improves the stability of the GAN training process; the resulting performance and stability confirm the rationality of our design.
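To make the RHTS formulation concrete, the following is a minimal sketch of one student-generator update, assuming a PyTorch implementation with hypothetical pre-built teacher_G, student_G, and discriminator D networks and an L1 distillation term; the paper's exact losses and weighting may differ. The student's adversarial loss against the fixed discriminator plays the role of the hard target, while the teacher generator's output on the same noise plays the role of the soft target.

```python
import torch
import torch.nn as nn

def rhts_generator_step(student_G, teacher_G, D, z, optimizer, alpha=0.5):
    """One RHTS-style training step for the student generator (sketch)."""
    student_G.train()
    teacher_G.eval()
    for p in D.parameters():          # discriminator is held fixed for this step
        p.requires_grad_(False)

    fake_student = student_G(z)
    with torch.no_grad():
        fake_teacher = teacher_G(z)   # soft target: teacher generator output

    # Hard target: WGAN-style generator loss against the fixed discriminator.
    hard_loss = -D(fake_student).mean()
    # Soft target: match the teacher generator's output (L1 distance assumed).
    soft_loss = nn.functional.l1_loss(fake_student, fake_teacher)

    loss = alpha * hard_loss + (1.0 - alpha) * soft_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The weighting factor alpha here is an illustrative hyperparameter balancing the hard and soft targets, not a value taken from the paper.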