A Distributed Stochastic Proximal-Gradient Algorithm for Composite Optimization

2021 
In this paper, we consider distributed composite optimization problems involving a common non-smooth regularization term over an undirected and connected network. Motivated by the wide range of applications of such problems in large-scale machine learning, each node's local cost function is further modeled as the average of a finite number of local cost subfunctions. In this setting, most existing proximal-based solutions tend to ignore the cost of gradient evaluations, which results in degraded performance. We instead develop a distributed stochastic proximal-gradient algorithm that tackles these problems by employing a local unbiased stochastic averaging gradient method. At each iteration, each node needs only a single local cost subfunction to evaluate a gradient, and the average of the most recent stochastic gradients then serves as an approximation of the true local gradient. An explicit linear convergence rate of the proposed algorithm is established with constant dual step-sizes for strongly convex local cost subfunctions with Lipschitz-continuous gradients. Furthermore, it is shown that, in the smooth case, our simplified analysis technique can be extended to some n…
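The abstract's "unbiased stochastic averaging gradient" is a SAGA-style estimator: each node stores the last gradient computed for each of its subfunctions, samples one subfunction per iteration, and corrects the fresh gradient with the stored table so the estimate stays unbiased. The sketch below illustrates this estimator combined with a proximal step for an assumed ℓ1 regularizer at a single node; the function names, the soft-thresholding prox, and the plain proximal-gradient loop are illustrative assumptions, since the paper's actual distributed update (neighbor communication and dual step-sizes) is not spelled out in the abstract.

```python
import numpy as np


def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (one example of a non-smooth regularizer)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)


class SagaNode:
    """SAGA-style unbiased stochastic averaging gradient estimator for one node.

    Keeps a table with the most recently evaluated gradient of each local
    subfunction; every call evaluates only a single, randomly chosen subfunction.
    """

    def __init__(self, grad_fns, dim, rng=None):
        self.grad_fns = grad_fns                      # per-subfunction gradient oracles
        self.table = np.zeros((len(grad_fns), dim))   # stored gradients
        self.table_mean = np.zeros(dim)               # running average of the table
        self.rng = rng or np.random.default_rng()

    def estimate(self, x):
        i = self.rng.integers(len(self.grad_fns))
        g_new = self.grad_fns[i](x)                   # one gradient evaluation per iteration
        # Unbiased estimate: fresh gradient minus its stored value plus the table average.
        g_est = g_new - self.table[i] + self.table_mean
        # Update the stored gradient and the running average of the table.
        self.table_mean += (g_new - self.table[i]) / len(self.grad_fns)
        self.table[i] = g_new
        return g_est


# Illustrative single-node proximal-gradient loop driven by the SAGA estimator
# (hypothetical; the paper's distributed version additionally mixes iterates
# with network neighbors and uses constant dual step-sizes).
def prox_saga(grad_fns, dim, step=0.1, reg=0.01, iters=1000):
    node = SagaNode(grad_fns, dim)
    x = np.zeros(dim)
    for _ in range(iters):
        x = soft_threshold(x - step * node.estimate(x), step * reg)
    return x
```

As a usage sketch, `grad_fns` could hold the gradients of local least-squares terms, e.g. `lambda x, A=A_i, b=b_i: A.T @ (A @ x - b)` for each data block, so that each iteration touches only one block while the table average keeps the estimator's variance decreasing over time.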