Adversarial Attacks on Graphs: How to Hide Your Structural Information

2021 
Deep learning has become a crown jewel of artificial intelligence, achieving impressive performance across many fields, especially computer vision. However, most deep learning models are vulnerable to adversarial attacks: slight, carefully crafted perturbations of the input that cause them to fail. As deep learning models are extended to graphs, adversarial attacks also threaten various graph data mining tasks, e.g., node classification, link prediction, community detection, and graph classification. An attacker can modify a graph's topology or node features, such as by manipulating a few edges or nodes, to degrade the performance of graph algorithms. This vulnerability may significantly hinder the deployment of these algorithms and has therefore received tremendous attention. In this chapter, we overview existing research on graph adversarial attacks. In particular, we briefly summarize and classify existing attack methods, e.g., heuristic, gradient-based, and reinforcement-learning-based, then introduce several classic adversarial attacks on different graph tasks in detail, and finally summarize the open challenges in this area.
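To make the idea of "manipulating a few edges to degrade performance" concrete, the following is a minimal, self-contained sketch of a greedy structure attack in the spirit of the heuristic methods the abstract mentions. It is an illustration only, not any specific published attack: it uses a hypothetical linear GCN surrogate (logits Z = A_norm · X · W) and, within an edge budget, flips whichever single edge most reduces the target node's classification margin.

```python
import numpy as np

def margin(A, X, W, target, c):
    """Classification margin of `target` for class `c` under a
    linear GCN surrogate Z = A_norm @ X @ W (illustrative assumption)."""
    A_hat = A + np.eye(A.shape[0])      # add self-loops
    A_norm = A_hat / A_hat.sum(1)[:, None]  # row-normalize adjacency
    Z = A_norm @ X @ W                  # surrogate logits
    others = np.delete(Z[target], c)    # logits of all other classes
    return Z[target, c] - others.max()  # positive => still correctly classified

def edge_flip_attack(A, X, W, target, true_class, budget=1):
    """Greedy attack: within `budget`, flip the edge (add or remove)
    whose flip most decreases the target node's margin.
    Exhaustive scoring of every candidate flip; a sketch, not Nettack/Metattack."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        best, best_score = None, np.inf
        for i in range(n):
            for j in range(i + 1, n):
                A2 = A.copy()
                A2[i, j] = A2[j, i] = 1 - A2[i, j]  # flip edge (i, j)
                score = margin(A2, X, W, target, true_class)
                if score < best_score:
                    best_score, best = score, (i, j)
        i, j = best
        A[i, j] = A[j, i] = 1 - A[i, j]  # commit the most damaging flip
    return A
```

Gradient-based attacks replace the exhaustive inner loop with the gradient of the loss with respect to the (relaxed, continuous) adjacency matrix, and reinforcement-learning attacks learn a policy over edge flips; the greedy scoring above is simply the easiest variant to state in a few lines.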