Obtaining Robust Models from Imbalanced Data

2022 
The vulnerability of deep neural network (DNN) models has been verified by the existence of adversarial examples. By applying slight perturbations to input examples, an adversary can generate adversarial examples that easily cause well-trained DNN models to make wrong predictions. Many defense methods have been proposed to improve the robustness of DNN models against adversarial examples; among them, adversarial training has been empirically shown to be one of the most effective. Almost all existing studies of adversarial training focus on balanced datasets, where each class has an equal number of training examples. However, since datasets collected in real-world applications cannot guarantee that all classes are uniformly distributed, obtaining robust models is much more challenging when the available training data are imbalanced. As an initial effort to study this problem, we first investigate how adversarially trained models and naturally trained models behave differently on imbalanced training datasets, and then explore possible solutions to facilitate adversarial training under imbalanced settings.
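
Since the abstract does not specify the paper's method, the following is only a minimal sketch of standard PGD-based adversarial training in PyTorch, with a simple per-class loss reweighting shown as one illustrative way to account for class imbalance; the model, data, and hyperparameters are assumptions for demonstration, not the authors' approach.

import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft adversarial examples with projected gradient descent (PGD)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_train_epoch(model, loader, optimizer, class_weights=None):
    """One epoch of adversarial training; class_weights is an assumed
    per-class loss-weight tensor, one simple knob for imbalanced data."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y, weight=class_weights)
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    # Toy usage on random data with a hypothetical 3-class imbalanced setup.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 3))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(16, 3, 32, 32)
    y = torch.randint(0, 3, (16,))
    # Inverse-frequency weights as one common reweighting heuristic.
    counts = torch.bincount(y, minlength=3).float().clamp(min=1)
    weights = counts.sum() / (3 * counts)
    adversarial_train_epoch(model, [(x, y)], opt, class_weights=weights)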