PVD-FL: A Privacy-Preserving and Verifiable Decentralized Federated Learning Framework

2022 
Over the past few years, the increasingly severe data island problem has spawned an emerging distributed deep learning framework, federated learning, in which a global model can be constructed over multiple participants without directly sharing their raw data. Despite its promising prospects, federated learning still faces many security challenges, such as privacy preservation and integrity verification. Furthermore, federated learning is usually performed with the assistance of a central server, which is prone to causing trust concerns and communication bottlenecks. To tackle these challenges, in this paper, we propose a privacy-preserving and verifiable decentralized federated learning framework, named PVD-FL, which can achieve secure deep learning model training under a decentralized architecture. Specifically, we first design an efficient and verifiable cipher-based matrix multiplication (EVCM) algorithm to execute the most basic calculation in deep learning. Then, by employing EVCM, we design a suite of decentralized algorithms to construct the PVD-FL framework, which ensures the confidentiality of both the global model and the local updates, as well as the verification of every training step. A detailed security analysis shows that PVD-FL protects privacy against various inference attacks and guarantees training integrity. In addition, extensive experiments on real-world datasets demonstrate that PVD-FL achieves lossless accuracy and practical performance.
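The abstract does not spell out the EVCM construction, but the two ingredients it names, computing a matrix product on protected (cipher/masked) inputs and verifying the result without redoing the work, can be illustrated with a minimal sketch. The snippet below is only an assumed toy illustration, not the paper's EVCM algorithm: it uses simple additive masking (with the precomputed term `mask_A @ B` assumed to be available to the data owner) and Freivalds' probabilistic check for verification.

```python
import numpy as np

def freivalds_verify(A, B, C, trials=10):
    """Probabilistically check that C == A @ B without recomputing the full product.

    Each trial multiplies by a random 0/1 vector, costing O(n^2) instead of O(n^3);
    an incorrect C is accepted with probability at most 2^-trials.
    """
    n = B.shape[1]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=(n, 1))
        if not np.allclose(A @ (B @ r), C @ r):
            return False
    return True

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # private input held by the data owner
B = rng.standard_normal((3, 5))   # public (or separately held) matrix
mask_A = rng.standard_normal(A.shape)

# Untrusted party computes on the masked input only.
C_masked = (A + mask_A) @ B

# Owner removes the mask (assumes mask_A @ B was precomputed offline)
# and then verifies the unmasked result with the cheap Freivalds check.
C = C_masked - mask_A @ B
assert freivalds_verify(A, B, C)
```

In a full protocol such as PVD-FL, the masking/encryption, the verification tags, and the decentralized exchange of updates are all designed jointly; this sketch only conveys why verifiable matrix multiplication is a natural building block for deep learning, where forward and backward passes reduce to matrix products.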