Discriminator-free Generative Adversarial Attack

2021 
Deep neural networks (DNNs) are vulnerable to adversarial examples (Figure 1): adding inconspicuous perturbations to images can make DNN-based systems fail. Most existing adversarial attacks are gradient-based and suffer from high latency and a heavy GPU memory load. Generative adversarial attacks avoid this limitation, and several related works propose GAN-based approaches. However, because GAN training is difficult to converge, the resulting adversarial examples have either weak attack ability or poor visual quality. In this work, we find that a discriminator is not necessary for generative adversarial attacks, and we propose the Symmetric Saliency-based Auto-Encoder (SSAE) to generate perturbations, composed of a saliency map module and an angle-norm disentanglement module for features. The advantage of our method is that it does not depend on a discriminator and uses the generated saliency map to focus on label-relevant regions. Extensive experiments across various tasks, datasets, and models demonstrate that the adversarial examples generated by SSAE not only make widely used models fail but also achieve good visual quality. The code is available at: https://github.com/BravoLu/SSAE.
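The two modules named above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the NumPy setting, and the exact loss form (push the adversarial feature's direction away from the clean one while keeping its norm) are assumptions made for clarity; the actual SSAE losses and saliency network are defined in the repository linked above.

```python
import numpy as np

def angle_norm_disentangle(f):
    """Split a feature vector into its norm and its unit direction (angle)."""
    norm = np.linalg.norm(f)
    direction = f / (norm + 1e-12)  # epsilon guards against zero vectors
    return norm, direction

def angular_attack_loss(f_clean, f_adv):
    """Illustrative attack objective (assumed form): rotate the adversarial
    feature away from the clean one (lower cosine similarity) while keeping
    the norms close, since class decisions depend mainly on direction."""
    n_c, d_c = angle_norm_disentangle(f_clean)
    n_a, d_a = angle_norm_disentangle(f_adv)
    cos_sim = float(np.dot(d_c, d_a))           # 1.0 = same direction
    norm_gap = abs(n_c - n_a) / (n_c + 1e-12)   # relative norm change
    return cos_sim + norm_gap                    # minimize: rotate, don't rescale

def apply_saliency(x, raw_perturbation, saliency_map, eps=8.0 / 255.0):
    """Weight the generated perturbation by a saliency map in [0, 1] so that
    label-relevant regions receive most of the attack budget."""
    delta = np.clip(saliency_map * raw_perturbation, -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)
```

Minimizing `angular_attack_loss` with respect to the generator's output drives the feature rotation; `apply_saliency` concentrates the perturbation where the saliency module predicts the label-relevant pixels are.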