Interpretable Generative Adversarial Networks With Exponential Function

2021 
The interpretability of Generative Adversarial Networks (GANs) may be closely related to their optimization objective functions; that is, information metrics play important roles in network training and data generation. In the original GAN, the objective function based on the Kullback-Leibler (KL) divergence limits the performance of training and generation. It is therefore worthwhile to investigate alternative objective functions for GAN optimization that improve the efficiency of network learning from the perspective of information metrics. In this paper, an objective function with exponential form, derived from the Message Importance Measure (MIM), is adopted to replace the logarithm-form objective in adversarial network optimization. This approach, named MIM-based GAN, may expose more hidden information, improving interpretability of the training process and the generation of probability events. Specifically, we first analyze the intrinsic relationship between the proposed approach and other classical GANs. Moreover, compared with the original GAN, LSGAN, and WGAN, we discuss its theoretical advantages in training performance, including sensitivity and convergence rate. In addition, we run simulations on datasets to confirm that the MIM-based GAN achieves state-of-the-art performance in training and data generation.
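For context, the log-form objective of the original GAN referenced above is the standard minimax value function

    \min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))].

The abstract does not spell out the exponential form taken from the MIM, so the substitution below is only an illustrative assumption: replacing \log t with -e^{-t}, which is likewise increasing in t and therefore preserves the minimax structure,

    \min_G \max_D V_{\mathrm{exp}}(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[-e^{-D(x)}] + \mathbb{E}_{z \sim p_z}[-e^{-(1 - D(G(z)))}].

A minimal PyTorch sketch of losses under this assumed form follows; the names mim_d_loss and mim_g_loss are hypothetical, and d_real/d_fake are taken to be sigmoid outputs of the discriminator in (0, 1):

    import torch

    def mim_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
        """Discriminator loss under the assumed exponential objective.

        The discriminator maximizes -e^{-D(x)} - e^{-(1 - D(G(z)))};
        the value is negated so a standard optimizer can minimize it.
        """
        return (torch.exp(-d_real) + torch.exp(-(1.0 - d_fake))).mean()

    def mim_g_loss(d_fake: torch.Tensor) -> torch.Tensor:
        """Generator loss: minimize -e^{-(1 - D(G(z)))}, pushing D(G(z)) toward 1."""
        return (-torch.exp(-(1.0 - d_fake))).mean()

Because -e^{-t} is monotone increasing like \log t, this sketch keeps the same adversarial roles as the original GAN while changing only the shape (and hence the gradients) of the objective; the paper's exact exponential form may differ.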