Shielding Federated Learning: A New Attack Approach and Its Defense

2021 
Federated learning (FL) is a newly emerging distributed learning framework that is communication-efficient and offers user privacy guarantees. Wireless end-user devices can collaboratively train a global model while keeping their local training data private. Nevertheless, recent studies show that FL is highly susceptible to attacks from malicious users, since the server cannot directly access and audit users' local training data. In this work, we identify a new attack surface that is much easier to exploit while maintaining a high attack success rate. By exploiting an inherent flaw in the weight assignment strategy of the standard federated learning process, our attack can bypass existing defense methods and effectively damage the performance of the global model. We then propose a new density-based detection strategy that defends against such attacks by modeling the problem as anomaly detection, so that anomalous updates can be identified effectively. Experimental results on two typical datasets, MNIST and CIFAR-10, show that our attack significantly affects the convergence of the aggregated model and reduces the accuracy of the global model, even when state-of-the-art defense strategies are deployed, while our newly proposed defense effectively mitigates the attack.
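
The abstract does not spell out which weight assignment rule is exploited; a plausible reading, under the standard FedAvg rule \(w_{\text{global}} = \sum_k (n_k / n)\, w_k\) where each client self-reports its sample count \(n_k\), is sketched below. The attack shown (inflating the reported \(n_k\)) is an illustrative assumption, not the paper's confirmed mechanism.

```python
# Minimal sketch of FedAvg-style aggregation, assuming weights are
# proportional to client-SELF-REPORTED dataset sizes. The inflated-count
# attack is one illustrative reading of "exploiting the weight
# assignment strategy", not code from the paper.
import numpy as np

def fedavg(updates):
    """Aggregate (reported_size, update) pairs, weighted by reported size."""
    total = sum(n for n, _ in updates)
    return sum((n / total) * w for n, w in updates)

rng = np.random.default_rng(0)
honest = [(1000, rng.normal(0.0, 0.01, size=4)) for _ in range(9)]
# A malicious client reports a hugely inflated dataset size, so its
# poisoned update dominates the weighted average.
poisoned = (10**6, np.full(4, 5.0))

print(fedavg(honest))               # near zero: benign consensus
print(fedavg(honest + [poisoned]))  # dragged toward the poisoned update
```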
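For the defense side, the abstract only states that detection is density-based and framed as anomaly detection. The sketch below uses scikit-learn's Local Outlier Factor as a stand-in density estimator over flattened client updates; it is an assumption for illustration, not the paper's exact strategy.

```python
# Hedged sketch of density-based anomaly detection over client updates.
# LocalOutlierFactor is a stand-in density estimator, not necessarily
# the detector proposed in the paper.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def filter_updates(updates, n_neighbors=5):
    """Keep updates in dense regions; drop density outliers before aggregation."""
    X = np.stack([u.ravel() for u in updates])
    labels = LocalOutlierFactor(n_neighbors=n_neighbors).fit_predict(X)
    return [u for u, lab in zip(updates, labels) if lab == 1]  # 1 = inlier

rng = np.random.default_rng(1)
benign = [rng.normal(0.0, 0.01, size=4) for _ in range(9)]
malicious = [np.full(4, 5.0)]
kept = filter_updates(benign + malicious)
print(len(kept))  # the anomalous update is excluded before aggregation
```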