Parameters Compressed Mechanism in Federated Learning for Edge Computing

2021 
With the continuous improvement of IoT and mobile device performance, IoT technology brings convenience to people's lives but can also leak personal privacy. Federated learning is a machine learning technique that can protect data privacy. Unlike conventional machine learning methods, federated learning typically faces non-independent and identically distributed (non-IID) data and keeps the data in local places. As a result, the effectiveness of existing machine learning methods degrades greatly on non-IID data in federated settings. At the same time, federated learning must interact with many edge learning nodes: as the number of edge nodes grows, the communication cost becomes higher and higher, and computing the parameter weights of so many independent nodes is also a problem. In this paper, we propose to balance the federated learning model by using the Earth Mover's Distance (EMD) to calculate the aggregation weights of different nodes' parameters. Our method reduces the impact of non-IID data on the model, so that the federated learning model is not biased toward particular distributed nodes' datasets. In addition, this paper also proposes a method to compress the redundant communication between nodes and the server during training. Experimental results show that our method improves accuracy by about 4% compared with the traditional federated averaging algorithm, and gradient communication is compressed to about 6% of the original.
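The abstract does not give the exact weighting rule, but the idea of using EMD to down-weight nodes whose data distribution deviates from the global one can be sketched as follows. This is a minimal illustration under assumed details (inverse-distance weighting over per-node label histograms; the helper names `emd_1d` and `node_weights` are ours, not the paper's):

```python
import numpy as np

def emd_1d(p, q):
    # Earth Mover's Distance between two discrete distributions over the
    # same ordered support; in 1-D it reduces to the summed absolute
    # difference of the cumulative distributions.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p /= p.sum()
    q /= q.sum()
    return np.abs(np.cumsum(p - q)).sum()

def node_weights(node_label_counts, num_classes):
    # Hypothetical aggregation weighting: nodes whose local label
    # distribution is far (in EMD) from the pooled global distribution
    # receive a smaller weight in the federated average.
    hists = []
    for counts in node_label_counts:
        h = np.zeros(num_classes)
        for label, c in counts.items():
            h[label] = c
        hists.append(h)
    global_hist = np.sum(hists, axis=0)
    dists = np.array([emd_1d(h, global_hist) for h in hists])
    w = 1.0 / (1.0 + dists)  # inverse-distance weighting (assumed form)
    return w / w.sum()
```

For example, with two balanced nodes and one node holding only class 0, the balanced nodes receive equal weights and the skewed node a smaller one.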
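The compression scheme itself is not specified in the abstract; a common way to shrink node-to-server gradient traffic to a few percent of the original, consistent with the ~6% figure reported, is top-k sparsification. The sketch below is an assumed illustration of that generic technique, not the paper's exact mechanism:

```python
import numpy as np

def topk_compress(grad, ratio=0.06):
    # Keep only the largest-magnitude entries (here ~6% of them) and
    # transmit (indices, values) instead of the dense gradient.
    k = max(1, int(grad.size * ratio))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def topk_decompress(idx, vals, size):
    # Server side: scatter the received values back into a zero vector.
    out = np.zeros(size)
    out[idx] = vals
    return out
```

In practice such schemes usually accumulate the dropped residual locally and add it to the next round's gradient so that no update is permanently lost.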