Robust Accelerated Primal-Dual Methods for Computing Saddle Points

2021 
We consider strongly convex/strongly concave saddle point problems assuming we have access to unbiased stochastic estimates of the gradients. We propose a stochastic accelerated primal-dual (SAPD) algorithm and show that the SAPD iterate sequence, generated using constant primal-dual step sizes, linearly converges to a neighborhood of the unique saddle point, where the size of the neighborhood is determined by the asymptotic variance of the iterates. Interpreting the asymptotic variance as a measure of robustness to gradient noise, we obtain explicit characterizations of robustness in terms of SAPD parameters and problem constants. Based on these characterizations, we develop computationally tractable techniques for optimizing the SAPD parameters, i.e., the primal and dual step sizes and the momentum parameter, to achieve a desired trade-off between the convergence rate and robustness on the Pareto curve. This allows SAPD, as an accelerated method, to enjoy fast convergence while remaining robust to noise. We also show that SAPD admits convergence guarantees for the gap metric with a variance term that is optimal up to a logarithmic factor, which can be removed by employing a restarting strategy. Furthermore, to our knowledge, our work is the first to show an iteration complexity result for the gap function on smooth strongly convex/strongly concave (SCSC) problems without the bounded domain assumption. Finally, we illustrate the efficiency of our approach on distributionally robust logistic regression problems.
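The abstract describes a stochastic accelerated primal-dual loop with constant step sizes and a momentum parameter. A minimal sketch of one such scheme is given below, under the assumption that the momentum is applied to the dual gradient (as in Chambolle-Pock-style methods); the bilinearly coupled quadratic test problem, its parameters, and the step-size choices are all illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical SCSC test problem: L(x, y) = (mu/2)||x||^2 + y^T A x - (mu/2)||y||^2,
# whose unique saddle point is (0, 0).
mu = 1.0
A = np.eye(2)

def grad_x(x, y, rng=None, noise=0.0):
    g = mu * x + A.T @ y
    if rng is not None and noise > 0:
        g = g + noise * rng.standard_normal(g.shape)  # unbiased stochastic estimate
    return g

def grad_y(x, y, rng=None, noise=0.0):
    g = A @ x - mu * y
    if rng is not None and noise > 0:
        g = g + noise * rng.standard_normal(g.shape)
    return g

def sapd(x0, y0, tau, sigma, theta, n_iters, rng=None, noise=0.0):
    """Sketch of a stochastic accelerated primal-dual loop: constant primal/dual
    step sizes (tau, sigma) and momentum theta applied to the dual gradient."""
    x, y = x0.copy(), y0.copy()
    gy_prev = grad_y(x, y, rng, noise)
    for _ in range(n_iters):
        gy = grad_y(x, y, rng, noise)
        # dual ascent on an extrapolated gradient: s_k = (1+theta)*g_k - theta*g_{k-1}
        y = y + sigma * ((1.0 + theta) * gy - theta * gy_prev)
        # primal descent using the freshly updated dual variable
        x = x - tau * grad_x(x, y, rng, noise)
        gy_prev = gy
    return x, y

x0, y0 = np.ones(2), np.ones(2)
# Noiseless run: linear convergence to the saddle point (0, 0).
x_det, y_det = sapd(x0, y0, tau=0.1, sigma=0.1, theta=0.5, n_iters=2000)
# Noisy run: converges to a neighborhood whose size reflects the gradient noise.
x_sto, y_sto = sapd(x0, y0, tau=0.1, sigma=0.1, theta=0.5, n_iters=2000,
                    rng=np.random.default_rng(0), noise=0.05)
```

The two runs illustrate the trade-off the paper studies: without noise the iterates contract linearly to the saddle point, while with noisy gradients they settle into a neighborhood whose radius depends on the step sizes, the momentum, and the noise level.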