A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions

2022 
Deep learning has been widely applied in fields such as computer vision, natural language processing, and data mining. Although deep learning has achieved significant success in solving complex problems, deep neural networks have been shown to be vulnerable to adversarial attacks that cause models to fail at their tasks, which limits the application of deep learning in security-critical areas. In this paper, we first review some classical and recent representative adversarial attacks, organized under a reasonable taxonomy. Then, we construct a knowledge graph from citation relationships using the software VOSviewer, and visualize and analyze the development of the subject based on information from 5923 articles indexed in Scopus. Finally, possible research directions for adversarial attacks are proposed based on trends identified through keyword detection analysis. All the data used for visualization are available at: .
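To give a concrete sense of the classical attacks this survey covers, the sketch below shows a minimal fast gradient sign method (FGSM) style perturbation in PyTorch. It is an illustrative example only, not code from the paper; the names `model`, `x`, `y`, and `epsilon` are hypothetical placeholders, and inputs are assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return adversarial examples for an input batch x with labels y.

    Perturbs each pixel by epsilon in the direction of the sign of the
    loss gradient, the core idea behind FGSM-style attacks.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the classification loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Clamp so the perturbed input remains a valid image.
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

Such a one-step attack is typically the starting point for the stronger iterative and optimization-based attacks that surveys in this area compare.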