Efficient Detection of Byzantine Attacks in Federated Learning Using Last Layer Biases.

2020 
Federated learning (FL) is an alternative to centralized machine learning (ML) that builds a model across multiple decentralized edge devices (a.k.a. workers) that own the training data. This has two advantages: i) the data used for training are not uploaded to the server, and ii) the server can distribute the training load across the workers instead of using its own resources. However, due to the distributed nature of FL, the server has no control over the behavior of the workers. Malicious workers can, therefore, orchestrate different kinds of attacks against FL. Byzantine attacks are among the most common and straightforward attacks on FL: they try to prevent FL models from converging by uploading random updates. Several techniques have been proposed to detect such attacks, but they usually entail a high cost for the server. This hampers one of the main benefits of FL, which is load reduction. In this work, we propose a highly efficient approach to detect workers that attempt Byzantine attacks on FL. In particular, we analyze the last-layer biases of deep learning (DL) models on the server side to detect malicious workers. We evaluate our approach with two deep learning models on the MNIST and CIFAR-10 datasets. Experimental results show that our approach significantly outperforms current methods in runtime while providing similar attack detection accuracy.
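The abstract does not spell out the detection rule, but the core idea it describes can be illustrated with a minimal sketch: the server compares each worker's uploaded last-layer bias vector against a robust center (here, the coordinate-wise median) and flags workers whose updates deviate strongly. The function name, the distance measure, and the `threshold` parameter below are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def flag_byzantine_workers(bias_updates, threshold=2.0):
    """Flag workers whose last-layer bias updates deviate strongly
    from the coordinate-wise median of all uploaded updates.

    bias_updates: sequence of shape (n_workers, n_biases), one row per
        worker's uploaded last-layer bias vector.
    threshold: hypothetical cut-off in robust z-score units.
    Returns the indices of flagged (suspected Byzantine) workers.
    """
    updates = np.asarray(bias_updates, dtype=float)
    center = np.median(updates, axis=0)              # robust center
    dists = np.linalg.norm(updates - center, axis=1)  # per-worker deviation
    # Median absolute deviation of the distances (robust scale estimate);
    # the small epsilon avoids division by zero when most workers agree.
    med = np.median(dists)
    mad = np.median(np.abs(dists - med)) + 1e-12
    scores = (dists - med) / mad
    return [i for i, s in enumerate(scores) if s > threshold]
```

Because only the last-layer bias vector is inspected (a few dozen values rather than millions of weights), this kind of check is cheap for the server, which matches the efficiency claim made in the abstract.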