Privacy-preserving Byzantine-robust federated learning

2022 
Abstract The robustness of federated learning has become a major concern, since Byzantine adversaries, who may upload false data owing to unreliable communication channels, corrupted hardware, or even malicious attacks, may be concealed among the distributed workers. Meanwhile, it has been shown that membership inference and model inversion attacks against federated learning can leak private training data. To address these challenges, we propose a privacy-preserving Byzantine-robust federated learning scheme (PBFL) that accounts for both the robustness of federated learning and the privacy of the workers. PBFL builds on an existing Byzantine-robust federated learning algorithm and combines it with distributed Paillier encryption and zero-knowledge proofs to guarantee privacy and to filter out anomalous parameters submitted by Byzantine adversaries. Finally, we prove that our scheme provides a higher level of privacy protection than previous Byzantine-robust federated learning algorithms.
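The abstract does not give the paper's construction, but the core privacy mechanism it names, additively homomorphic Paillier encryption, can be illustrated with a minimal sketch: workers encrypt their (integer-quantized) gradient values, and the server aggregates them by multiplying ciphertexts, never seeing any individual update. The key sizes and the single-party key below are toy assumptions for readability; the scheme described in the paper uses a *distributed* Paillier key so that no single party can decrypt on its own.

```python
# Minimal sketch (not the paper's construction): additively homomorphic
# Paillier aggregation. Toy 11-bit primes for demonstration only; real
# deployments use >= 2048-bit keys and distributed key generation.
import math
import random

def keygen(p=1789, q=1861):           # toy primes (assumed parameters)
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)              # valid because we fix g = n + 1
    return (n, n + 1), (lam, mu)      # public key (n, g), secret key

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:        # r must be a unit mod n
        r = random.randrange(2, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)            # (1+n)^(m*lam) ≡ 1 + m*lam*n mod n^2
    return ((x - 1) // n * mu) % n

pk, sk = keygen()
# Three workers encrypt one quantized gradient coordinate each.
updates = [12, 7, 30]
ciphers = [encrypt(pk, m) for m in updates]
# Homomorphic property: the product of ciphertexts decrypts to the sum.
agg = 1
for c in ciphers:
    agg = (agg * c) % (pk[0] ** 2)
print(decrypt(pk, sk, agg))  # 49
```

In a Byzantine-robust setting this aggregation alone is not enough, which is where the zero-knowledge proofs come in: workers must additionally prove that their encrypted updates are well-formed, so that anomalous parameters can be filtered without decrypting them individually.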