Federated Model Distillation with Noise-Free Differential Privacy

2021 
Conventional federated learning, which directly averages model weights, is only possible when all local models share the same architecture; this naturally imposes a restrictive constraint on collaboration between models with heterogeneous architectures. Sharing predictions instead of weights removes this obstacle and eliminates the risk of white-box inference attacks present in conventional federated learning. However, the predictions of local models are themselves sensitive and can leak private information to the public, and there is currently no theoretical guarantee that sharing predictions is private and secure. A naive remedy is to add differentially private random noise to the predictions, as in previous privacy work on federated learning. Although random noise perturbation mitigates the privacy concern, it introduces a substantial trade-off between the privacy budget and model performance. In this paper, we fill this gap by proposing a novel framework called FedMD-NFDP, which applies the newly proposed Noise-Free Differential Privacy (NFDP) mechanism to a federated model distillation framework. NFDP effectively protects the privacy of local data with minimal sacrifice of model utility. Our extensive experimental results on various datasets validate that FedMD-NFDP delivers comparable utility and communication efficiency while providing a noise-free differential privacy guarantee. We also demonstrate the feasibility of FedMD-NFDP by considering both IID and non-IID settings, heterogeneous model architectures, and unlabelled public datasets drawn from a different distribution.
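
To make the setting concrete, the following is a minimal, purely illustrative Python sketch of one communication round in the prediction-sharing style of federated model distillation described above. It is not the authors' implementation; the toy prototype "model", the helper names, and the sampling ratio are hypothetical. The key ideas it mirrors are that (1) each party trains only on a random subsample of its private data, which is the sampling step that NFDP leverages for a noise-free privacy guarantee, and (2) parties exchange only predictions on a shared public dataset, never weights or raw data.

import numpy as np

# Hypothetical sketch of one FedMD-NFDP-style round (not the reference code).

rng = np.random.default_rng(0)
NUM_CLASSES = 3

def train_local_model(features, labels):
    # Toy "model": per-class mean feature vectors; any architecture could be
    # used here, since only predictions are shared between parties.
    return {int(c): features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict_logits(model, public_features, num_classes):
    # Negative distance to each class prototype serves as a logit.
    dim = public_features.shape[1]
    dists = [np.linalg.norm(public_features - model.get(c, np.zeros(dim)), axis=1)
             for c in range(num_classes)]
    return -np.stack(dists, axis=1)

public_x = rng.normal(size=(50, 8))          # shared unlabelled public data
consensus = np.zeros((50, NUM_CLASSES))

for party in range(2):
    private_x = rng.normal(size=(200, 8))    # stand-in for a party's private data
    private_y = rng.integers(0, NUM_CLASSES, size=200)

    # Noise-free DP via sampling without replacement: train on k of n records.
    # The privacy guarantee is controlled by the ratio k/n, not by added noise.
    k = 60
    idx = rng.choice(len(private_x), size=k, replace=False)
    local_model = train_local_model(private_x[idx], private_y[idx])

    # Only predictions on the public data leave the party.
    consensus += predict_logits(local_model, public_x, NUM_CLASSES)

consensus /= 2   # averaged predictions act as the distillation target
print(consensus.shape)   # (50, 3)

In a full run, each party would then distill the averaged consensus predictions back into its own (possibly heterogeneous) local model and the round would repeat; the sketch stops at the aggregation step for brevity.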