Faking Signals to Fool Deep Neural Networks in AMC via Few Data Points

2021 
Recent years have witnessed the rapid development of Deep Learning (DL) based Automatic Modulation Classification (AMC) methods, which have been shown to outperform traditional classification approaches. To disturb deep neural networks for AMC, in this paper we propose an adversarial attack method that generates fake signals to fool DL-based classifiers. First, constraints on the visual difference and recoverability of fake signals are defined. Next, a Few Data Point Attacker (FDPA) is proposed to generate fake signals by perturbing only a few data points via a differential evolution algorithm. Experiments are conducted on a public dataset, RML2016.10a, and the results show that fake signals generated by the FDPA remarkably reduce the accuracies of three types of DL-based AMC classifiers: a Convolutional Neural Network (CNN) based classifier, a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) based classifier, and a classifier combining CNN and LSTM-RNN. The code will be made available.
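The abstract does not spell out the attack's implementation, but a differential-evolution search over a handful of perturbed I/Q samples can be sketched as follows. This is a minimal, illustrative sketch only, not the authors' code: the function names (`fdpa_attack`, `apply_perturbation`), the black-box `predict_proba` classifier interface, the (2, 128) I/Q sample shape from RML2016.10a, and the specific point count and amplitude bound are all assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def apply_perturbation(signal, candidate, n_points, eps):
    """Perturb n_points samples of a (2, L) I/Q signal.

    `candidate` encodes each point as (position, delta_I, delta_Q),
    flattened into one vector so differential evolution can search it.
    """
    perturbed = signal.copy()
    length = signal.shape[1]
    for k in range(n_points):
        pos, d_i, d_q = candidate[3 * k: 3 * k + 3]
        idx = int(np.clip(pos, 0, length - 1))
        # Bounding the per-point change keeps the fake signal visually
        # close to, and recoverable from, the original.
        perturbed[0, idx] += np.clip(d_i, -eps, eps)
        perturbed[1, idx] += np.clip(d_q, -eps, eps)
    return perturbed

def fdpa_attack(predict_proba, signal, true_label, n_points=3, eps=0.05):
    """Untargeted attack: search for a few-point perturbation that
    minimizes the classifier's confidence in the true class.

    `predict_proba` is an assumed black-box function mapping a batch of
    (2, L) signals to class probabilities; no gradients are required.
    """
    length = signal.shape[1]
    bounds = [(0, length - 1), (-eps, eps), (-eps, eps)] * n_points

    def objective(candidate):
        fake = apply_perturbation(signal, candidate, n_points, eps)
        # Lower true-class probability means a more successful attack.
        return predict_proba(fake[None, ...])[0, true_label]

    result = differential_evolution(objective, bounds, maxiter=30,
                                    popsize=15, tol=1e-5, seed=0)
    return apply_perturbation(signal, result.x, n_points, eps)
```

Because differential evolution is gradient-free, a sketch like this treats the classifier purely as a black box, which matches the setting of attacking deployed AMC models; the small search space (three values per perturbed point) is what makes restricting the attack to a few data points tractable.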