Characterizing and Evaluating Adversarial Examples for Offline Handwritten Signature Verification

2019 
The phenomenon of adversarial examples is attracting increasing interest from the machine learning community, due to its significant impact on the security of machine learning systems. Adversarial examples are samples that are similar (under a perceptual notion of similarity) to samples from the data distribution, yet “fool” a machine learning classifier. For computer vision applications, these are images with carefully crafted but almost imperceptible changes that are misclassified. In this paper, we characterize this phenomenon under an existing taxonomy of threats to biometric systems, in particular identifying new attacks for offline handwritten signature verification systems. We conducted an extensive set of experiments on four widely used datasets: MCYT-75, CEDAR, GPDS-160, and the Brazilian PUC-PR, considering both a CNN-based system and a system using a handcrafted feature extractor. We found that attacks that aim to get a genuine signature rejected are easy to generate, even in a limited-knowledge scenario where the attacker has access to neither the trained classifier nor the signatures used for training. Attacks that aim to get a forgery accepted are harder to produce, and often require a higher level of noise; in most cases the perturbations are no longer “imperceptible”, in contrast to previous findings in object recognition. We also evaluated the impact of two countermeasures on the success rate of the attacks and on the amount of noise required to generate successful attacks.
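As an illustration of how such perturbations are typically crafted, the sketch below implements the fast gradient sign method (FGSM), a standard gradient-based attack that takes a single signed gradient step bounded in the L-infinity norm. It is a minimal sketch, not the exact attack procedure evaluated in the paper; the toy classifier, image shape, and epsilon value are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon):
        """Fast gradient sign method: one signed gradient step that increases
        the classification loss, bounded by epsilon in the L-infinity norm."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Move each pixel by +/- epsilon in the direction that raises the loss,
        # then clip back to the valid image range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    # Illustrative usage with a toy classifier (placeholder, not the paper's model).
    if __name__ == "__main__":
        model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 2))
        x = torch.rand(1, 1, 28, 28)    # a synthetic grayscale "signature" image
        y = torch.tensor([1])           # its (placeholder) label
        x_adv = fgsm_attack(model, x, y, epsilon=0.05)
        print((x_adv - x).abs().max())  # perturbation stays within epsilon

In a misclassification (Type I) attack, the step direction increases the loss so a genuine signature is rejected; targeted variants instead step to decrease the loss toward acceptance of a forgery, which, as the abstract notes, generally requires larger perturbations.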