Model Capacity Vulnerability in Hyper-Parameters Estimation

2020 
Machine learning models are vulnerable to a variety of data perturbations. Recent research focuses mainly on the vulnerability of model training and proposes various model-oriented defense methods to achieve robust machine learning. However, most existing work overlooks the vulnerability of model capacity, which is more fundamental to model performance. In this paper, we study an adversarial vulnerability of model capacity caused by poisoning the estimation of model hyper-parameters. We further instantiate this vulnerability for the polynomial regression model, where evading model-oriented detection is challenging, to illustrate its effectiveness. Extensive experiments on one synthetic and three real-world data sets demonstrate that the attack can effectively mislead the hyper-parameter estimation of the polynomial regression model by poisoning only a small number of camouflage samples that cannot be detected by model-oriented defense methods.
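The abstract does not specify the paper's attack construction, but the general idea it describes can be sketched in miniature: a few off-curve points can change which polynomial degree (the capacity hyper-parameter) a simple model-selection rule picks. Everything below is an illustrative assumption, not the paper's method: the `select_degree` selection rule, the tolerance, and the placement of the "camouflage" points are all invented for this sketch.

```python
import numpy as np

def select_degree(x, y, max_degree=5, tol=1e-6):
    """Illustrative hyper-parameter estimator (not the paper's): pick the
    smallest polynomial degree whose in-sample MSE is within tol of the best."""
    mses = []
    for d in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, d)
        mses.append(np.mean((np.polyval(coeffs, x) - y) ** 2))
    best = min(mses)
    for d, m in enumerate(mses, start=1):
        if m <= best + tol:
            return d

# Clean data drawn exactly from a quadratic, so degree 2 is selected.
x = np.linspace(-1.0, 1.0, 21)
y = x ** 2
clean_degree = select_degree(x, y)

# Hypothetical "camouflage" points: a few off-curve samples near x = 1
# that a quadratic cannot absorb but a higher-degree polynomial can.
x_poison = np.array([0.95, 0.97, 0.99])
y_poison = np.array([5.0, 5.0, 5.0])
poisoned_degree = select_degree(np.concatenate([x, x_poison]),
                                np.concatenate([y, y_poison]))

print(clean_degree, poisoned_degree)
```

With the clean quadratic data the rule settles on degree 2; the three injected points push the selected degree higher, i.e. the estimated model capacity is misled without touching the training procedure itself.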