MI-FGSM on Faster R-CNN Object Detector

2020 
Adversarial examples expose the vulnerability of deep neural networks, which has drawn wide attention to adversarial attacks. However, most attack methods target image classification models. In this paper, we use the Momentum Iterative Fast Gradient Sign Method (MI-FGSM), which stabilizes optimization and helps escape poor local maxima, to generate adversarial examples against the Faster R-CNN object detector. We improve on previous object detection attack methods. The strongest existing attack on object detection, Projected Gradient Descent (PGD), starts from a random initialization, which makes its results uncertain. In contrast, our attack is more stable and more powerful in both white-box and black-box settings, and adapts better to different neural network architectures. Experiments on Pascal VOC 2007 show that, under the same white-box setting, PGD reduces Faster R-CNN with a VGG16 backbone to 0.23% mean average precision (mAP), while our method reduces it to 0.17%. In addition, we analyze the differences between classification and detection attacks and find that, besides misclassification, adversarial examples generated against detection models can also cause mislocalization.
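For readers unfamiliar with MI-FGSM, the sketch below illustrates the momentum-accumulated iterative update described above. It is a minimal PyTorch illustration, not the paper's implementation: `loss_fn(x)` is assumed to return the scalar attack loss for a batch of images (e.g., the summed classification and box-regression losses of a Faster R-CNN model against the ground-truth targets), and the hyperparameters `eps`, `steps`, and `mu` are illustrative defaults.

```python
import torch

def mi_fgsm(loss_fn, images, eps=8 / 255, steps=10, mu=1.0):
    """Sketch of MI-FGSM (momentum iterative FGSM) against a detector.

    loss_fn(x) -- assumed callable returning the scalar detection loss
    images     -- clean input batch of shape (N, C, H, W), values in [0, 1]
    """
    alpha = eps / steps                      # per-step size
    x_adv = images.clone().detach()          # deterministic start (no random init, unlike PGD)
    g = torch.zeros_like(images)             # accumulated momentum

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(x_adv)
        grad, = torch.autograd.grad(loss, x_adv)

        # normalize the current gradient by its mean L1 magnitude, then accumulate momentum
        grad = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        g = mu * g + grad

        # signed ascent step, projected back into the eps-ball around the clean images
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, images - eps), images + eps)
        x_adv = x_adv.clamp(0.0, 1.0)

    return x_adv.detach()
```

The key difference from PGD visible here is the deterministic starting point and the momentum term `g`, which accumulates normalized gradients across iterations so the update direction does not oscillate between steps.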