Accelerating Federated Learning via Parallel Servers: A Theoretically Guaranteed Approach

2022 
With the growth of participating clients, the centralized parameter server (PS) will severely limit the scale and efficiency of Federated Learning (FL). A straightforward approach to scaling up the FL system is to construct a Parallel FL (PFL) system with multiple parallel PSes. However, it is unclear whether PFL can actually accelerate FL, i.e., reduce its training time. Even if the answer is yes, it is non-trivial to design a highly efficient parameter averaging algorithm for a PFL system. In this paper, we propose a completely parallelizable FL algorithm called P-FedAvg under the PFL architecture. P-FedAvg extends the well-known FedAvg algorithm by allowing multiple PSes to cooperate and train a learning model together. In P-FedAvg, each PS is only responsible for a fraction of the clients, but PSes mix model parameters with their neighbors in a carefully designed way so that the FL model converges properly. Unlike heuristic-based algorithms, P-FedAvg comes with theoretical guarantees. Specifically, we theoretically analyze the convergence rate of P-FedAvg in terms of the number of iterations, the communication cost per global iteration, and the optimal weights for each PS to mix parameters with its neighbors. Based on this analysis, we conduct a case study on five typical overlay topologies formed by PSes to further examine the communication efficiency under different topologies, and investigate how the overlay topology affects the convergence rate, communication cost, and robustness of a PFL system. Lastly, we perform extensive experiments with real datasets to verify our analysis and demonstrate that P-FedAvg significantly speeds up FL compared with traditional FedAvg and other competitive baselines. We believe that our work can help lay a theoretical foundation for building more efficient PFL systems.
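To make the described two-level structure concrete, below is a minimal, illustrative sketch of one global round of parallel-server averaging: each PS first runs a standard FedAvg aggregation over its own fraction of clients, then mixes its parameters with overlay neighbors using a mixing matrix. The ring topology, uniform mixing weights W, the synthetic quadratic client objectives, and helper names such as local_sgd and fedavg_within_ps are assumptions made for illustration only; the paper's actual P-FedAvg procedure, optimal mixing weights, and topologies are defined by its theoretical analysis, not by this code.

```python
# Illustrative sketch only: NOT the paper's implementation of P-FedAvg.
import numpy as np

rng = np.random.default_rng(0)

DIM = 10            # model dimension
NUM_PS = 4          # number of parallel parameter servers
CLIENTS_PER_PS = 5  # each PS serves a disjoint fraction of clients
LOCAL_STEPS = 3
LR = 0.1

# Assumed ring overlay among PSes with uniform (doubly stochastic) mixing weights.
W = np.zeros((NUM_PS, NUM_PS))
for i in range(NUM_PS):
    W[i, i] = 1 / 3
    W[i, (i - 1) % NUM_PS] = 1 / 3
    W[i, (i + 1) % NUM_PS] = 1 / 3

# Each client holds a synthetic quadratic objective ||x - target||^2 / 2 (illustrative).
client_targets = rng.normal(size=(NUM_PS, CLIENTS_PER_PS, DIM))

def local_sgd(x, target):
    """Run a few local SGD steps on one client's quadratic loss."""
    for _ in range(LOCAL_STEPS):
        x = x - LR * (x - target)  # gradient of the quadratic loss is (x - target)
    return x

def fedavg_within_ps(ps_model, ps_idx):
    """Standard FedAvg over the clients attached to one PS."""
    updates = [local_sgd(ps_model.copy(), t) for t in client_targets[ps_idx]]
    return np.mean(updates, axis=0)

# One model per PS, initialized identically.
ps_models = [np.zeros(DIM) for _ in range(NUM_PS)]

for rnd in range(50):
    # Step 1: each PS aggregates its own clients (FedAvg within the PS).
    aggregated = [fedavg_within_ps(m, i) for i, m in enumerate(ps_models)]
    # Step 2: PSes mix parameters with their overlay neighbors using weights W.
    ps_models = [sum(W[i, j] * aggregated[j] for j in range(NUM_PS))
                 for i in range(NUM_PS)]

# For this synthetic objective, the global optimum is the mean of all client targets.
global_opt = client_targets.reshape(-1, DIM).mean(axis=0)
print("max PS distance to global optimum:",
      max(np.linalg.norm(m - global_opt) for m in ps_models))
```

With a doubly stochastic mixing matrix over a connected overlay, the per-PS models in this sketch contract toward the same global average, which is the intuition behind why the choice of topology and mixing weights governs the convergence rate and communication cost studied in the paper.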