Beyond Online Balanced Descent: An Optimal Algorithm for Smoothed Online Optimization

2019 
We study online convex optimization in a setting where the learner seeks to minimize the sum of a per-round hitting cost and a movement cost which is incurred when changing decisions between rounds. We prove a new lower bound on the competitive ratio of any online algorithm in the setting where the costs are m-strongly convex and the movement costs are the squared l_2 norm. This lower bound shows that no algorithm can achieve a competitive ratio that is o(m^(−1/2)) as m tends to zero. No existing algorithms have competitive ratios matching this bound, and we show that the state-of-the-art algorithm, Online Balanced Descent (OBD), has a competitive ratio that is Ω(m^(−2/3)). We additionally propose two new algorithms, Greedy OBD (G-OBD) and Regularized OBD (R-OBD), and prove that both algorithms have an O(m^(−1/2)) competitive ratio. The result for G-OBD holds when the hitting costs are quasiconvex and the movement costs are the squared l_2 norm, while the result for R-OBD holds when the hitting costs are m-strongly convex and the movement costs are Bregman divergences. Further, we show that R-OBD simultaneously achieves a constant, dimension-free competitive ratio and sublinear regret when hitting costs are strongly convex.
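To make the setting concrete, below is a minimal Python sketch of the kind of regularized update the abstract attributes to R-OBD: each round, the learner minimizes the hitting cost plus regularizers toward the previous point and toward the hitting-cost minimizer. It assumes squared l_2 movement costs; the helper name r_obd_step, the weights lam1 and lam2, and the quadratic toy costs are illustrative assumptions, not values or definitions taken from the paper.

```python
# Sketch of a regularized OBD-style round under assumed squared l_2 costs.
# lam1/lam2 and the objective form are illustrative, not the paper's spec.
import numpy as np
from scipy.optimize import minimize

def r_obd_step(f_t, x_prev, lam1=1.0, lam2=0.5):
    """One round: minimize the hitting cost f_t plus a movement
    regularizer toward x_prev and a regularizer toward the
    hitting-cost minimizer v_t (both squared l_2 here)."""
    v_t = minimize(f_t, x_prev).x  # minimizer of this round's hitting cost
    def objective(x):
        return (f_t(x)
                + lam1 * 0.5 * np.sum((x - x_prev) ** 2)  # movement cost
                + lam2 * 0.5 * np.sum((x - v_t) ** 2))    # pull toward v_t
    return minimize(objective, x_prev).x

# Toy usage: m-strongly convex quadratic hitting costs with drifting minima.
rng = np.random.default_rng(0)
m, x = 0.5, np.zeros(2)
for t in range(5):
    target = rng.normal(size=2)
    f_t = lambda x, c=target: 0.5 * m * np.sum((x - c) ** 2)
    x = r_obd_step(f_t, x)
    print(t, np.round(x, 3))
```

In this sketch the strong-convexity parameter m of the hitting costs is what the paper's bounds are stated in terms of: smaller m makes the per-round minima "flatter", and the competitive ratio is allowed to grow as m shrinks, at rate at most O(m^(−1/2)).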