Adversarial Examples Generation System Based on Gradient Shielding of Restricted Region

2020 
In recent years, deep neural networks have greatly advanced machine learning tasks. However, the emergence of adversarial examples has revealed the vulnerability of neural networks. As a result, the security of neural networks is drawing more research attention than before, and a large number of attack methods have been proposed to generate adversarial examples for evaluating the robustness of neural networks. Furthermore, adversarial examples can be widely adopted in machine vision, natural language processing, and video recognition applications. In this paper, we study adversarial examples against image classification networks. Inspired by methods for detecting key regions in object detection tasks, we built a restricted-region-based adversarial example generation system for image classification. We proposed a novel method, called gradient mask, that generates adversarial examples with a high attack success rate and small perturbations. The experimental results further validated the method's performance.
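The abstract gives no implementation details, but the core idea it describes can be illustrated with a short sketch: a one-step, FGSM-style attack whose gradient is zeroed ("shielded") everywhere outside a binary mask marking the restricted key region. The function name masked_fgsm, the epsilon value, and the mask convention below are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def masked_fgsm(model, image, label, region_mask, eps=8 / 255):
        """One-step FGSM-style attack confined to a restricted region.

        region_mask: tensor of shape (1, 1, H, W) with 1 inside the
        attacked (key) region and 0 elsewhere. Gradients outside the
        region are shielded (zeroed) before the perturbation is applied.
        (Illustrative sketch; not the paper's actual implementation.)
        """
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Shield the gradient outside the restricted region.
        masked_grad = image.grad.sign() * region_mask
        adv = (image + eps * masked_grad).clamp(0.0, 1.0)
        return adv.detach()

    # Example usage (assumes a pretrained classifier and an image in [0, 1]):
    # mask = torch.zeros(1, 1, 224, 224)
    # mask[:, :, 80:160, 80:160] = 1.0   # hypothetical key region
    # adv_image = masked_fgsm(model, image, label, mask)

Confining the perturbation to a detected key region keeps the overall distortion small while still targeting the pixels that most influence the classifier's decision, which matches the trade-off the abstract claims.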