Stealthy and Robust Glitch Injection Attack on Deep Learning Accelerator for Target With Variational Viewpoint

2021 
Deep neural network (DNN) accelerators overcome the power and memory walls for executing neural-net models locally on edge-computing devices to support sophisticated AI applications. The advocacy of the "model once, run optimized anywhere" paradigm introduces a potential new security threat to edge intelligence that is methodologically different from the well-known adversarial examples. Existing adversarial examples modify the input samples presented to an AI application, either digitally or physically, to cause a misclassification. Nevertheless, these input-based perturbations are neither robust nor surreptitious against multi-view targets. To generate an adversarial example that misclassifies a real-world target under varying viewing angles, lighting, and distances, a substantial number of samples of the target is required to extract the rare anomalies that can cross the decision boundary, and the feasible perturbations are substantial and visually perceptible. In this paper, we propose a new glitch injection attack on the DNN accelerator that is capable of misclassifying a target under variational viewpoints. The glitches injected into the computation clock signal induce transitory but disruptive errors in the intermediate results of the multiply-and-accumulate (MAC) operations. The attack pattern for each target of interest consists of sparse instantaneous glitches, which can be derived from just one sample of the target. Two modes of attack patterns are derived, and their effectiveness is demonstrated on four representative ImageNet models implemented on the Deep-learning Processing Unit (DPU) of an FPGA edge platform and its DNN development toolchain. The attack success rates are evaluated on 118 objects in 61 diverse sensing conditions, including 25 viewing angles (−60° to 60°), 24 illumination directions, and 12 color temperatures. In the covert mode, the success rates of our attack exceed those of existing stealthy adversarial examples by more than 16.3%, with only two glitches injected over tens of thousands to a million cycles for one complete inference. In the robust mode, the attack success rates on all four DNNs exceed 96.2%, with an average glitch intensity of 1.4% and a maximum glitch intensity of 10.2%.
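
As a rough illustration of the fault mechanism described in the abstract, the Python sketch below models a MAC pipeline in which a clock glitch on a chosen cycle corrupts the intermediate partial sum. The single-bit-flip fault model, accumulator width, and glitch positions are illustrative assumptions, not the paper's measured DPU behavior.

import numpy as np

def mac_with_glitches(weights, activations, glitch_cycles=(), flip_bit=15):
    """Accumulate w*a products one cycle at a time; on each glitched cycle,
    flip one bit of the partial sum to model a transitory computation error.
    (Assumed fault model for illustration only.)"""
    glitch_cycles = set(glitch_cycles)
    acc = 0
    for cycle, (w, a) in enumerate(zip(weights, activations)):
        acc += int(w) * int(a)            # one MAC per clock cycle
        if cycle in glitch_cycles:        # sparse, instantaneous glitch
            acc ^= (1 << flip_bit)        # transient error in the partial sum
    return acc

# Example: two glitches over a long accumulation, mirroring the "sparse
# instantaneous glitches" described above (values are arbitrary).
rng = np.random.default_rng(0)
w = rng.integers(-128, 128, size=4096)    # int8-like weights
x = rng.integers(0, 256, size=4096)       # uint8-like activations
clean = mac_with_glitches(w, x)
faulty = mac_with_glitches(w, x, glitch_cycles=(100, 2500))
print(clean, faulty)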