Distributed Nonsmooth Convex Optimization over Markovian Switching Random Networks with Two Step-Sizes

2021 
This paper investigates the distributed convex optimization problem over a multi-agent system with Markovian switching communication networks. The objective function is the sum of the agents' local nonsmooth objective functions, each of which is unknown to the other agents. The communication network is assumed to switch over a set of weight-balanced directed graphs according to a Markov chain. The authors propose a consensus subgradient algorithm with two time-scale step-sizes to handle the Markovian switching topologies and the absence of global gradient information. With a proper selection of step-sizes, they prove almost sure convergence of all agents' local estimates to a common optimal solution, provided the union graph of the Markovian network's states is strongly connected and the Markov chain is irreducible. A convergence rate analysis is also given for specific cases, and simulations demonstrate the results.
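To illustrate the kind of two step-size consensus-subgradient iteration the abstract describes, the following is a minimal sketch, not the paper's exact algorithm. The local objectives f_i(x) = |x - b_i|, the two mixing matrices, the Markov transition matrix, and the step-size exponents are all hypothetical choices made for the example.

```python
import numpy as np

# Illustrative sketch (not the authors' exact method): a generic consensus-
# subgradient iteration with separate consensus and subgradient step-sizes,
# run over a Markov chain of weight-balanced communication graphs.

rng = np.random.default_rng(0)

N = 4                                  # number of agents
b = np.array([1.0, 2.0, 3.0, 4.0])     # f_i(x) = |x - b_i|; any median of b is optimal

# Two weight-balanced (here doubly stochastic) mixing matrices whose union
# graph is strongly connected; the active one is selected by a two-state
# irreducible Markov chain.
W = [
    np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5]]),
    np.array([[0.5, 0.0, 0.0, 0.5],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5]]),
]
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])             # irreducible Markov transition matrix

def subgrad(x, b_vec):
    """A valid subgradient of |x_i - b_i| for each agent (sign, with 0 at the kink)."""
    return np.sign(x - b_vec)

x = rng.normal(size=N)                 # local estimates, one per agent
state = 0                              # initial Markov state

for k in range(1, 20001):
    beta_k = 1.0 / k**0.6              # consensus step-size (decays more slowly)
    alpha_k = 1.0 / k                  # subgradient step-size (decays faster)

    Wk = W[state]
    # Consensus step scaled by beta_k, followed by a local subgradient step.
    x = x + beta_k * (Wk @ x - x) - alpha_k * subgrad(x, b)

    # Advance the Markovian switching signal.
    state = rng.choice(2, p=P[state])

print("final estimates:", np.round(x, 3))   # should cluster near a median of b
```

The two time scales mirror the abstract's idea: the consensus weight beta_k dominates the subgradient weight alpha_k asymptotically, so disagreement among agents is driven down faster than the optimization step perturbs it.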