Federated Optimization Based on Compression and Event-triggered Communication

2021 
Federated learning is deemed a promising solution to large-scale machine learning problems: it enables multiple edge users to cooperatively train a global parameter model while guaranteeing users a basic level of privacy. However, despite the increasing interest, communication expenditure often becomes a major bottleneck for scaling up distributed algorithms in unreliable or rate-limited network environments. In practice, users or clients synchronize models periodically regardless of whether the current model differs significantly from the last one transmitted, which wastes communication resources. Considering both how much information to transmit in each communication round and when to communicate, we propose FedCET, a compression and event-triggered algorithm for federated learning. We present a convergence analysis with rigorous proofs for smooth nonconvex, strongly convex (or PL-condition), and general convex objective functions, showing that this communication scheme is efficient without affecting the convergence properties of the algorithm. Further, we evaluate the proposed FedCET on several datasets to demonstrate its effectiveness compared with other methods.
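The abstract gives no pseudocode, but the two ideas it combines are standard building blocks. As an illustration only, the minimal sketch below pairs a top-k sparsifying compressor with a norm-based event trigger; the function names (`top_k_compress`, `maybe_transmit`) and the parameters `threshold` and `k` are hypothetical choices for this sketch, not the paper's actual compressor or triggering rule.

```python
import numpy as np

def top_k_compress(delta, k):
    """Keep only the k largest-magnitude entries of the update
    (a common compression operator, assumed here for illustration)."""
    flat = delta.ravel()
    if k >= flat.size:
        return delta
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(delta.shape)

def maybe_transmit(local_model, last_sent_model, threshold, k):
    """Event trigger: upload a compressed update only when the model has
    drifted enough since the last transmission; otherwise skip the round."""
    delta = local_model - last_sent_model
    if np.linalg.norm(delta) <= threshold:  # change too small: stay silent
        return None
    return top_k_compress(delta, k)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_local = rng.normal(size=10)
    w_sent = np.zeros(10)
    update = maybe_transmit(w_local, w_sent, threshold=0.5, k=3)
    print("transmitted:", update)  # None would mean the trigger skipped this round
```

The point of the combination is that the trigger decides *when* to communicate while the compressor decides *how much* to send; a client that returns `None` costs nothing that round, and one that does transmit sends only k nonzero coordinates.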