Collaborative Generative Adversarial Network with Visual Perception and Memory Reasoning

2020 
Abstract To address such shortcomings of GANs as mediocre image quality and high demand for training samples and computational resources, this paper proposes the collaborative Generative Adversarial Network with visual perception and memory reasoning (ESA-CGAN). The model not only uses a visual self-attention mechanism and an object-saliency model to analyze the global information and detailed features of objects, but also designs a cross-correlation self-attention module to balance computational and statistical efficiency on the one hand against the ability to model long-range dependencies on the other. Building on a convolutional long short-term memory network, the attention feature map is optimized so as to highlight the objects' own features and improve the generative ability. Meanwhile, a cooperative learning mechanism between generators is designed, which combines a self-constructed generation model with a pre-trained generation model to form a generator group; this effectively improves the model's generative ability and computing efficiency, and also restrains mode collapse from another angle. The proposed model is evaluated in numerical experiments on several common benchmark datasets and self-constructed datasets, and is compared with several mainstream generative adversarial network models in terms of image data augmentation performance. The experimental results demonstrate that the model has strong modeling ability and can effectively perform data augmentation, making it highly applicable in practice.
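Since the abstract only names the components (a self-attention module over image features, refined by a convolutional LSTM), the sketch below is a minimal illustration of that general idea, not the authors' implementation: the module names, channel sizes, SAGAN-style attention form, and single ConvLSTM refinement step are all assumptions made for illustration.

```python
# Hypothetical sketch: a self-attention block over a feature map, whose output
# is refined by one step of a minimal ConvLSTM cell (both assumed, not from the paper).
import torch
import torch.nn as nn


class SelfAttention2d(nn.Module):
    """Self-attention over the spatial positions of a 2D feature map."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)       # (b, hw, c//8)
        k = self.key(x).flatten(2)                          # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)                 # (b, hw, hw) attention map
        v = self.value(x).flatten(2)                        # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)   # re-weighted features
        return self.gamma * out + x                         # residual connection


class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell used here to refine the attention feature map."""

    def __init__(self, channels, hidden, kernel_size=3):
        super().__init__()
        self.hidden = hidden
        self.gates = nn.Conv2d(channels + hidden, 4 * hidden,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state=None):
        b, _, h, w = x.shape
        if state is None:
            hx = x.new_zeros(b, self.hidden, h, w)
            cx = x.new_zeros(b, self.hidden, h, w)
        else:
            hx, cx = state
        i, f, o, g = self.gates(torch.cat([x, hx], dim=1)).chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        cx = f * cx + i * torch.tanh(g)                      # update cell state
        hx = o * torch.tanh(cx)                              # refined feature map
        return hx, (hx, cx)


# Usage: attend over a generator feature map, then refine it with one ConvLSTM step.
features = torch.randn(2, 64, 16, 16)            # hypothetical intermediate features
attended = SelfAttention2d(64)(features)
refined, _ = ConvLSTMCell(64, 64)(attended)
print(refined.shape)                             # torch.Size([2, 64, 16, 16])
```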