Adversarial Robustness in Deep Learning: From Practices to Theories

KDD 2021
Deep neural networks (DNNs) have achieved unprecedented success in a variety of machine learning tasks. However, recent studies demonstrate that DNNs are extremely vulnerable to adversarial examples: maliciously synthesized inputs that look benign but can severely mislead a DNN model's predictions. For machine learning practitioners who deploy DNNs, understanding the behavior of adversarial examples not only helps them improve the safety of their models but also gives them deeper insight into how DNNs work. In this tutorial, we provide a comprehensive overview of recent advances in adversarial examples and their countermeasures, from both practical and theoretical perspectives. On the practical side, we give a detailed introduction to popular algorithms for generating adversarial examples under different adversarial goals. We also discuss how defense strategies have been developed to resist these attacks, and how new attacks have emerged to break these defenses. On the theoretical side, we discuss a series of intrinsic behaviors of robust DNNs that differ from those of standard DNNs, especially their optimization and generalization properties. Finally, we introduce DeepRobust, a PyTorch adversarial learning library that aims to provide a comprehensive and easy-to-use platform to foster research in this field. Through this tutorial, the audience can grasp the main ideas of adversarial attacks and defenses and gain deep insight into DNN robustness. The tutorial's official website is at https://sites.google.com/view/kdd21-tutorial-adv-robust.
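
As a concrete illustration of the attack algorithms the tutorial covers, the sketch below implements the Fast Gradient Sign Method (FGSM), a classic one-step attack that perturbs an input in the direction of the sign of the loss gradient. The model, labels, and epsilon budget here are illustrative assumptions, not tied to any particular experiment from the tutorial.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Project back to the valid input range so the image stays well-formed.
    return x_adv.clamp(0.0, 1.0).detach()
```

For the DeepRobust library itself, the snippet below follows the usage pattern from the project's README as we understand it; the module paths, the PGD class signature, and the attack_params configuration key are assumptions that should be verified against the library's own documentation.

```python
import torch
from deeprobust.image.attack.pgd import PGD
from deeprobust.image.config import attack_params

# Assumptions: `model` is a trained classifier, and `x`, `y` are a batch of
# inputs and labels already loaded onto the chosen device.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
adversary = PGD(model, device)
x_adv = adversary.generate(x, y, **attack_params['PGD_CIFAR10'])
```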