On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Federated Learning

2022 
Federated learning (FL) is an emerging collaborative machine learning (ML) framework that enables training of predictive models in a distributed fashion, where communication among the participating nodes is facilitated by a central server. To deal with the communication bottleneck at the server, decentralized FL (DFL) methods rely on local communication of nodes with their neighbors over a specified communication network. In DFL, it is common algorithmic practice to have nodes interleave (local) gradient descent iterations with gossip (i.e., averaging over the network) steps. As the size of ML models grows, the limited communication bandwidth among the nodes does not permit communication of full-precision messages; hence, it is becoming increasingly common to require that messages be lossy, compressed versions of the local parameters. The requirement of communicating compressed messages gives rise to an important question: given a fixed communication budget, what communication strategy minimizes the (training) loss as much as possible? In this article, we explore this direction and show that in such compressed DFL settings, there are benefits to having multiple gossip steps between subsequent gradient iterations, even when the cost of doing so is appropriately accounted for, e.g., by means of reducing the precision of the compressed information. In particular, we show that having ${\mathcal O}(\log \frac{1}{\epsilon})$ gradient iterations with constant step size, with ${\mathcal O}(\log \frac{1}{\epsilon})$ gossip steps between every pair of consecutive iterations, enables convergence to within $\epsilon$ of the optimal value for a class of non-convex problems that arise in the training of deep learning models, namely, smooth non-convex objectives satisfying the Polyak-Łojasiewicz condition. Empirically, we show that our proposed scheme bridges the gap between centralized gradient descent and DFL on various machine learning tasks across different network topologies and compression operators.
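
To make the interleaving of local gradient steps and compressed gossip steps concrete, here is a minimal NumPy sketch, not the paper's exact algorithm or analysis: the `quantize` compression operator, the `gossip_round` update, the ring mixing matrix `W`, and the choices of `T`, `Q`, and `bits` are all illustrative assumptions.

```python
import numpy as np

def quantize(x, bits=4):
    # Illustrative compression operator: uniform quantization to `bits` bits.
    scale = np.max(np.abs(x)) + 1e-12
    levels = 2 ** (bits - 1) - 1
    return np.round(x / scale * levels) / levels * scale

def gossip_round(X, W, bits):
    # One gossip (averaging) step: each node mixes compressed copies of its
    # neighbors' parameters according to the row-stochastic mixing matrix W.
    Xq = np.stack([quantize(x, bits) for x in X])
    return [x + W[i] @ Xq - Xq[i] for i, x in enumerate(X)]

def decentralized_gd(grads, X0, W, step=0.1, T=50, Q=5, bits=4):
    # Interleave T local gradient iterations with Q compressed gossip steps each.
    X = [x.copy() for x in X0]
    for _ in range(T):
        X = [x - step * g(x) for x, g in zip(X, grads)]  # local gradient step
        for _ in range(Q):                               # multiple gossip steps
            X = gossip_round(X, W, bits)
    return X

# Toy usage on a 4-node ring, each node holding a local quadratic objective.
n, d = 4, 10
rng = np.random.default_rng(0)
targets = [rng.normal(size=d) for _ in range(n)]
grads = [lambda x, t=t: x - t for t in targets]          # gradient of 0.5*||x - t||^2
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1))
X = decentralized_gd(grads, [np.zeros(d) for _ in range(n)], W)
```

In this sketch each node keeps its own full-precision iterate and only the messages exchanged over the network are quantized; increasing `Q` (gossip rounds per gradient iteration) while lowering `bits` keeps the per-iteration communication budget roughly fixed, which is the trade-off the abstract discusses.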