Secure Multiparty Learning from Aggregation of Locally Trained Models.

2019 
In this paper, we propose a new protocol for secure multiparty learning (SML) from the aggregation of locally trained models, using homomorphic proxy re-encryption and aggregate signature techniques. In our scheme, we use secure verifiable computation delegation to privately generate labels for an auxiliary unlabeled public dataset. Based on this labeled dataset, a central entity can learn a global model without direct access to the parties' local private datasets. The generalization performance of SML is excellent, nearly matching the accuracy of a model trained on the union of all parties' datasets. We implement SML and evaluate it on MNIST, and extensive analysis shows that our method is effective, efficient, and secure.
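The sketch below illustrates only the high-level aggregation idea described in the abstract, under simplifying assumptions: each party trains a model on its private data, the parties' predictions on an auxiliary public unlabeled set are combined by majority vote, and a central model is trained on the resulting pseudo-labeled public data. The cryptographic machinery of the actual protocol (homomorphic proxy re-encryption, aggregate signatures, verifiable computation delegation) is omitted, and the label aggregation is done in the clear. The dataset (scikit-learn's digits, as a stand-in for MNIST), the number of parties, and all function names are illustrative choices, not taken from the paper.

```python
# Non-cryptographic sketch of learning from aggregated local models.
# Assumptions: majority-vote label aggregation in the clear; the paper
# instead performs this step privately via homomorphic proxy re-encryption.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)  # stand-in for MNIST

# Hold out a test set, a "public unlabeled" pool, and private party shards.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_priv, X_pub, y_priv, _ = train_test_split(X_rest, y_rest, test_size=0.3, random_state=0)

n_parties = 5  # illustrative choice
party_splits = np.array_split(rng.permutation(len(X_priv)), n_parties)

# Each party trains locally on its own private shard.
local_models = []
for idx in party_splits:
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_priv[idx], y_priv[idx])
    local_models.append(clf)

# Aggregate the parties' predicted labels on the public data by majority vote.
votes = np.stack([m.predict(X_pub) for m in local_models])  # shape (n_parties, n_pub)
pseudo_labels = np.apply_along_axis(
    lambda col: np.bincount(col).argmax(), axis=0, arr=votes)

# The central entity trains a global model on the pseudo-labeled public set only,
# never touching the parties' private datasets.
global_model = LogisticRegression(max_iter=1000)
global_model.fit(X_pub, pseudo_labels)
print("global model test accuracy:", global_model.score(X_test, y_test))
```

In this simplified setting, the central entity sees only the public inputs and the aggregated labels, which mirrors the paper's claim that the global model is learned without direct access to the local private datasets.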