Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

Authors:
Ali Shafahi, University of Maryland
Ronny Huang, UMCP and EY
Mahyar Najibi, University of Maryland
Octavian Suciu, University of Maryland
Christoph Studer, Cornell University
Tudor Dumitras, University of Maryland
Tom Goldstein, University of Maryland

Abstract:

Data poisoning is an attack on machine learning models wherein the attacker adds examples to the training set to manipulate the behavior of the model at test time. This paper explores poisoning attacks on neural networks. The proposed attacks are "clean-label" attacks: they do not require the attacker to have any control over the labeling of the training data. They are also targeted: they control the behavior of the classifier on a specific test instance without degrading overall classifier performance. For example, an attacker could add a seemingly innocuous, correctly labeled image to the training set of a face recognition engine and thereby control the identity assigned to a chosen person at test time. Because the attacker does not need to control the labeling function, poisons could be entered into the training set simply by putting them online and waiting for them to be scraped by a data collection bot.
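To make the attack concrete: in the full paper, poisons are crafted by an optimization-based "feature collision," perturbing a correctly labeled base image so that its penultimate-layer feature representation matches that of the chosen target instance, while a proximity term keeps the poison visually close to the base. The sketch below is a minimal PyTorch illustration under stated assumptions, not the paper's exact procedure (the paper uses a forward-backward splitting optimizer rather than the joint Adam step shown here); the names feature_extractor, target, base, and the hyperparameter values are placeholders.

    import torch
    import torch.nn.functional as F

    def craft_poison(feature_extractor, target, base, beta=0.1, lr=0.01, steps=1000):
        # Hypothetical sketch of clean-label poison crafting via feature collision.
        # feature_extractor: a frozen network mapping images to penultimate-layer
        # features; target and base are image tensors in [0, 1] of matching shape.
        poison = base.detach().clone().requires_grad_(True)
        with torch.no_grad():
            target_feats = feature_extractor(target)  # fixed target representation
        opt = torch.optim.Adam([poison], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            # Pull the poison's features toward the target's features...
            collision = F.mse_loss(feature_extractor(poison), target_feats, reduction="sum")
            # ...while keeping its pixels close to the innocuous base image.
            proximity = F.mse_loss(poison, base, reduction="sum")
            loss = collision + beta * proximity
            loss.backward()
            opt.step()
            with torch.no_grad():
                poison.clamp_(0.0, 1.0)  # keep a valid image
        return poison.detach()

Per the paper's threat model, the resulting poison is inserted into the victim's training set carrying the base image's (correct) label; after the victim retrains or fine-tunes, the target instance is misclassified as the base class while overall accuracy is preserved.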
