Playing Against Deep Neural Network-Based Object Detectors: A Novel Bidirectional Adversarial Attack Approach

2021 
In the fields of deep learning and computer vision, the security of object detection models has received extensive attention, and revealing the security vulnerabilities caused by adversarial attacks has become one of the most important research directions. Existing studies show that object detection models can be threatened by adversarial examples just like other deep neural network-based models, e.g., those for classification. In this paper, we propose a bidirectional adversarial attack method. Firstly, the added perturbation pushes the detector's predictions away from the ground-truth class and toward the background class. Secondly, a confidence loss function is designed for the region proposal network to reduce its foreground scores. Thirdly, the adversarial examples are generated by a pre-trained autoencoder that is trained in an adversarial manner, which enhances the similarity between the adversarial examples and the original image and speeds up convergence. The proposed method was verified on the most popular two-stage detection framework, Faster R-CNN, where it achieved a 55.1% mAP drop. In addition, the adversarial examples show strong transferability: applying them to the common one-stage detection framework YOLOv3 yields a 39.5% mAP drop.
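The abstract gives no formulas, but a minimal sketch of the two attack losses it describes might look as follows. All names here (bidirectional_cls_loss, rpn_confidence_loss, bg_index, and the tensor shapes) are illustrative assumptions rather than the authors' code; the sketch assumes a PyTorch-style detector that exposes per-detection class logits and RPN objectness logits.

```python
import torch
import torch.nn.functional as F

def bidirectional_cls_loss(cls_logits, gt_labels, bg_index=0):
    """Bidirectional term (assumed form): push predictions away from the
    ground-truth class while pulling them toward the background class.

    cls_logits: (N, C) class logits for N matched detections.
    gt_labels:  (N,)   ground-truth class indices (long tensor).
    bg_index:   index of the background class in the detector's label space.
    """
    log_probs = F.log_softmax(cls_logits, dim=-1)
    # Minimizing the ground-truth log-probability moves predictions
    # away from the true class ...
    away_from_gt = log_probs.gather(1, gt_labels.unsqueeze(1)).mean()
    # ... while maximizing the background log-probability (minimizing its
    # negative) pulls them toward the background class.
    toward_bg = -log_probs[:, bg_index].mean()
    return away_from_gt + toward_bg

def rpn_confidence_loss(objectness_logits):
    """Confidence term (assumed form): drive the RPN's foreground scores
    toward zero so that fewer proposals reach the detection head."""
    return torch.sigmoid(objectness_logits).mean()

# Toy usage: 5 detections over 21 classes (index 0 = background) and
# 100 RPN anchors; a real attack would take these from Faster R-CNN.
logits = torch.randn(5, 21)
labels = torch.randint(1, 21, (5,))
loss = bidirectional_cls_loss(logits, labels) + rpn_confidence_loss(torch.randn(100))
```

Under these assumptions, the full attack objective would add a similarity penalty (e.g., an L2 distance between the adversarial and clean images) and be minimized with respect to the autoencoder's parameters rather than a per-image perturbation.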