On the Convergence of Stochastic Gradient Descent with Bandwidth-based Step Size

2021 
We investigate the stochastic gradient descent (SGD) method in which the step size lies within a banded region rather than being given by a fixed formula. We prove the optimal convergence rate under mild conditions and a large initial step size. Our analysis provides comparable theoretical error bounds for SGD under a variety of step sizes. In addition, the convergence rates of some existing step size strategies, e.g., the triangular policy and cosine-wave, can be recovered within our analytical framework under the boundary constraints. The bandwidth-based step size enables efficient and flexible step size selection in optimization. We also propose a $1/t$ up-down policy and give several non-monotonic step sizes. Numerical experiments demonstrate the efficiency and significant potential of the bandwidth-based step size in many applications.
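The following is a minimal sketch of the idea described in the abstract: the step size at iteration $t$ is confined to a band, here assumed to be of the form $[m\,\eta_0/t,\; M\,\eta_0/t]$, and an up-down (triangular-like) cycle moves the step size between the two boundaries. All parameter names (`eta0`, `m`, `M`, `period`) and the specific cycle shape are illustrative assumptions, not the authors' exact policy.

```python
import numpy as np

def bandwidth_step_size(t, eta0=1.0, m=0.2, M=1.0, period=50):
    """Illustrative bandwidth-based step size (hypothetical parameters).

    The step size at iteration t is kept inside the band
    [m * eta0 / t, M * eta0 / t]; within the band it follows a
    simple up-down (triangular-like) cycle of the given period.
    """
    lower = m * eta0 / t
    upper = M * eta0 / t
    phase = (t % period) / period          # position in the current cycle
    frac = 2 * phase if phase < 0.5 else 2 * (1 - phase)   # up, then down
    return lower + frac * (upper - lower)

def sgd(grad_fn, x0, n_iters=500):
    """Plain SGD loop using the bandwidth-based step size above."""
    x = np.asarray(x0, dtype=float)
    for t in range(1, n_iters + 1):
        g = grad_fn(x, t)                  # stochastic gradient at iteration t
        x = x - bandwidth_step_size(t) * g
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Noisy gradient of f(x) = 0.5 * ||x||^2 as a toy stochastic objective.
    noisy_grad = lambda x, t: x + 0.1 * rng.standard_normal(x.shape)
    print(sgd(noisy_grad, x0=np.ones(5)))
```

Any step size schedule that stays between the two band boundaries (monotone, cyclic, or non-monotonic) could be substituted for `bandwidth_step_size` without changing the SGD loop itself.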