A General Framework for Decentralized Optimization With First-Order Methods

2020 
Decentralized optimization to minimize a finite sum of functions, distributed over a network of nodes, has been a significant area within control and signal-processing research due to its natural relevance to optimal control and signal estimation problems. More recently, the emergence of sophisticated computing and large-scale data science needs has led to a resurgence of activity in this area. In this article, we discuss decentralized first-order gradient methods, which have found tremendous success in control, signal processing, and machine learning problems, where, due to their simplicity, they serve as the method of first choice for many complex inference and training tasks. In particular, we provide a general framework for decentralized first-order methods that is applicable to directed and undirected communication networks alike and show that much of the existing work on optimization and consensus can be related explicitly to this framework. We further extend the discussion to decentralized stochastic first-order methods that rely on stochastic gradients at each node and describe how local variance reduction schemes, previously shown to have promise in centralized settings, can improve the performance of decentralized methods when combined with what is known as gradient tracking. We motivate and demonstrate the effectiveness of the corresponding methods in the context of machine learning and signal-processing problems that arise in decentralized environments.
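
To give a concrete sense of the gradient-tracking idea mentioned above, the following is a minimal sketch of decentralized gradient descent with gradient tracking over an undirected network. It is not the paper's reference implementation: the ring topology, the doubly stochastic mixing matrix `W`, the local least-squares objectives, and the step size `alpha` are all illustrative assumptions.

```python
# Sketch of decentralized gradient descent with gradient tracking (DIGing-style).
# Assumptions (not from the paper): ring network, Metropolis-style mixing
# weights, synthetic local least-squares objectives f_i(x) = 0.5*||A_i x - b_i||^2.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, alpha, iters = 5, 3, 0.01, 1000

# Synthetic local data held by each node (assumed for illustration).
A = [rng.standard_normal((10, dim)) for _ in range(n_nodes)]
b = [rng.standard_normal(10) for _ in range(n_nodes)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])  # gradient of local f_i

# Doubly stochastic mixing matrix for an undirected ring graph.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_nodes] = 1 / 3
    W[i, (i + 1) % n_nodes] = 1 / 3

# Gradient tracking: x mixes neighbor iterates, y tracks the network-average gradient.
x = np.zeros((n_nodes, dim))
y = np.array([grad(i, x[i]) for i in range(n_nodes)])  # y_i^0 = grad f_i(x_i^0)
for _ in range(iters):
    x_new = W @ x - alpha * y
    y = W @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n_nodes)])
    x = x_new

# With a suitable step size, all nodes approximately agree on the minimizer
# of the global sum of local objectives.
print("max disagreement across nodes:", np.abs(x - x.mean(axis=0)).max())
```

The key design feature is the auxiliary variable `y`: by mixing it with neighbors and correcting it with the change in the local gradient, each node maintains a running estimate of the average gradient across the network, which removes the steady-state bias of plain decentralized gradient descent.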