Hiding All Labels for Multi-label Images: An Empirical Study of Adversarial Examples

2021 
Adversarial examples for deep learning models have received much attention in recent years, covering both single-label and multi-label adversarial examples. In this paper, we present, for the first time, an empirical study of generating multi-label adversarial examples that hide all labels of a multi-label image. The objective of hiding all labels is to make a deep learning model perceive nothing in its environment: the model reports that nothing is present even though the input actually contains multiple labeled objects. This makes the setting well worth studying. In the empirical study, we evaluate five state-of-the-art multi-label attack algorithms (ML-CW, ML-DP, FGSM, MI-FGSM, and MLA-LP) on four popular datasets (VOC2007, VOC2012, NUS-WIDE, and COCO) against two typical models (ML-GCN and ASL). We conduct extensive experiments and report the attack success rates and perturbation magnitudes of the adversarial examples generated by these algorithms. We also report attack performance when a typical defense based on JPEG compression is applied. This work is intended to benefit future research on both generating multi-label adversarial examples and defending against them.
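To make the hide-all-labels objective concrete, the sketch below shows one possible single-step, FGSM-style formulation for a sigmoid-based multi-label classifier: the adversary treats the empty label set (all zeros) as the target and takes a signed gradient step that pushes every label score below the decision threshold. The function name, the L-infinity budget `eps`, and the 0.5 threshold are illustrative assumptions, not the settings evaluated in the paper.

```python
import torch
import torch.nn.functional as F

def hide_all_labels_fgsm(model, x, eps=8 / 255):
    """Illustrative FGSM-style step that tries to suppress every label
    predicted by a multi-label classifier with sigmoid outputs.

    Assumes `model(x)` returns per-label logits of shape (batch, num_labels);
    the epsilon budget is a placeholder, not the paper's configuration.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)

    # Target the empty label set: every label should be predicted negative.
    target = torch.zeros_like(logits)
    loss = F.binary_cross_entropy_with_logits(logits, target)
    loss.backward()

    # Targeted FGSM step: descend the loss w.r.t. the input so that all
    # sigmoid scores are pushed below the 0.5 decision threshold.
    x_adv = (x - eps * x.grad.sign()).clamp(0, 1).detach()
    return x_adv
```

In this sketch, success would be measured by checking that `torch.sigmoid(model(x_adv)) > 0.5` holds for no label, i.e., the model predicts an empty label set for the perturbed image.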