PRIAG: Proximal Reweighted Incremental Aggregated Gradient Algorithm for Distributed Optimizations

2020 
Large-scale machine learning problems are nowadays tackled by distributed optimization algorithms, i.e., algorithms that leverage multiple workers for training. However, collecting information from all workers in every iteration is sometimes expensive or even prohibitive. In this paper, we propose an iterative algorithm called proximal reweighted incremental aggregated gradient (PRIAG) for solving a class of nonconvex and nonsmooth problems, which are ubiquitous in machine learning and distributed optimization. In each iteration, the algorithm needs information from only one worker, thanks to the incremental aggregated gradient scheme. Combined with a reweighting technique, the method requires only an easy-to-compute proximal operator to handle the nonconvex and nonsmooth terms. Using a Lyapunov function analysis, we prove that the PRIAG algorithm converges under mild assumptions. We apply this approach to nonconvex nonsmooth problems and distributed optimization tasks. Numerical experiments on both synthetic and real data sets show that our algorithm achieves comparable learning performance, but more efficiently, than previous nonconvex solvers.
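To make the mechanism concrete, the sketch below illustrates a generic proximal incremental aggregated gradient loop with iterative reweighting: each iteration refreshes the stored gradient of a single worker, updates the aggregated gradient incrementally, and applies a weighted soft-thresholding proximal step. The least-squares objective, the 1/(|x|+eps) reweighting rule, the cyclic worker schedule, and the step size are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np


def soft_threshold(v, thresh):
    """Proximal operator of a weighted l1 norm (per-coordinate soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)


def priag_sketch(A_blocks, b_blocks, lam=0.1, eps=1e-2, step=1e-2, iters=200):
    """Illustrative sketch of a proximal reweighted incremental aggregated gradient loop.

    Assumptions (not from the paper): worker i holds f_i(x) = 0.5*||A_i x - b_i||^2,
    the nonconvex sparsity penalty is approximated by a reweighted l1 term with
    weights w_j = 1/(|x_j| + eps), and workers are polled cyclically.
    """
    N = len(A_blocks)
    d = A_blocks[0].shape[1]
    x = np.zeros(d)
    # Stored (possibly stale) per-worker gradients, initialized at x = 0.
    grads = [A.T @ (A @ x - b) for A, b in zip(A_blocks, b_blocks)]
    agg = np.sum(grads, axis=0)

    for k in range(iters):
        i = k % N                        # refresh only one worker per iteration
        g_new = A_blocks[i].T @ (A_blocks[i] @ x - b_blocks[i])
        agg += g_new - grads[i]          # incremental update of the aggregated gradient
        grads[i] = g_new
        w = 1.0 / (np.abs(x) + eps)      # reweighting based on the current iterate
        x = soft_threshold(x - step * agg / N, step * lam * w)
    return x


if __name__ == "__main__":
    # Toy usage: 4 workers with synthetic least-squares blocks.
    rng = np.random.default_rng(0)
    A_blocks = [rng.standard_normal((20, 10)) for _ in range(4)]
    b_blocks = [A @ np.ones(10) + 0.01 * rng.standard_normal(20) for A in A_blocks]
    print(priag_sketch(A_blocks, b_blocks))
```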