Enhancing Robustness of Classifiers Based on PCA

2021 
To date, deep learning techniques have been widely adopted. However, deep neural networks (DNNs) are vulnerable to adversarial attacks, which has become one of the hidden risks affecting system security. An adversarial example is a perturbed input crafted to fool a deep learning model. The inherent lack of robustness of DNNs to adversarial examples raises security problems, especially for tasks that require high reliability. This paper proposes a robustness-enhancing method based on principal component analysis (PCA) and applies it to deep networks, improving the ability of DNNs to resist adversarial attacks. Specifically, the proposed method first uses PCA to reduce the dimensionality of clean samples, and then applies two non-targeted attacks, DeepFool and FGSM, to craft adversarial examples both before and after the dimensionality reduction. Finally, by evaluating the change in the classifier's robustness, we draw the corresponding analytical conclusions. Experimental results on MNIST show that the proposed method makes deep networks more robust against white-box attacks.
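The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, assuming a small MLP classifier trained on PCA-reduced MNIST features and a white-box FGSM attack applied in the reduced feature space (DeepFool is omitted for brevity); the component count, network shape, and perturbation budget `eps` are illustrative assumptions, not values from the paper.

```python
# Sketch: PCA dimensionality reduction + FGSM robustness evaluation on MNIST.
# Hyperparameters below (50 components, eps=0.1, MLP sizes) are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA

# Load MNIST and fit PCA on clean training images (the "downscale" step).
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X.astype(np.float32) / 255.0
y = y.astype(np.int64)
X_train, y_train, X_test, y_test = X[:60000], y[:60000], X[60000:], y[60000:]

pca = PCA(n_components=50)  # assumed component count
Z_train = pca.fit_transform(X_train).astype(np.float32)
Z_test = pca.transform(X_test).astype(np.float32)

# Small classifier trained on the PCA-reduced features.
model = nn.Sequential(nn.Linear(50, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

Zt, yt = torch.from_numpy(Z_train), torch.from_numpy(y_train)
for epoch in range(5):
    for i in range(0, len(Zt), 256):
        opt.zero_grad()
        loss_fn(model(Zt[i:i+256]), yt[i:i+256]).backward()
        opt.step()

def fgsm(model, z, y, eps):
    """White-box FGSM: perturb inputs along the sign of the loss gradient."""
    z = z.clone().requires_grad_(True)
    loss_fn(model(z), y).backward()
    return (z + eps * z.grad.sign()).detach()

Zs, ys = torch.from_numpy(Z_test), torch.from_numpy(y_test)
Z_adv = fgsm(model, Zs, ys, eps=0.1)  # assumed perturbation budget

clean_acc = (model(Zs).argmax(1) == ys).float().mean().item()
adv_acc = (model(Z_adv).argmax(1) == ys).float().mean().item()
print(f"clean accuracy: {clean_acc:.3f}, FGSM accuracy: {adv_acc:.3f}")
```

Comparing `clean_acc` with `adv_acc` measures the robustness change the paper evaluates; the "pre-downscale" variant would instead compose the PCA projection into the model and attack in pixel space.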