The Proximal Robbins-Monro Method

2020 
The need for parameter estimation with massive datasets has reinvigorated interest in stochastic optimization and iterative estimation procedures. Stochastic approximations are at the forefront of this development because they yield procedures that are simple, general, and fast. However, standard stochastic approximations are often numerically unstable. Deterministic optimization, on the other hand, increasingly uses proximal updates to achieve numerical stability in a principled manner. A theoretical gap has thus emerged: while standard stochastic approximations are subsumed by the framework of Robbins and Monro (1951), there is no such framework for stochastic approximations with proximal updates. In this paper, we conceptualize a proximal version of the classical Robbins-Monro procedure. Our theoretical analysis demonstrates that the proposed procedure has important stability benefits over the classical Robbins-Monro procedure while retaining the best-known convergence rates. Exact implementations of the proximal Robbins-Monro procedure are challenging, but we show that approximate implementations lead to procedures that are easy to implement and still dominate classical procedures by achieving numerical stability, practically without tradeoffs. Moreover, approximate proximal Robbins-Monro procedures can be applied even when the objective cannot be computed analytically, so they generalize the stochastic proximal procedures currently in use.
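
The paper's exact procedure is not reproduced here. As a hedged illustration of the stability contrast the abstract describes, the sketch below compares the classical (explicit) Robbins-Monro update, theta_n = theta_{n-1} + a_n * H(theta_{n-1}, X_n), with an implicit-update variant in which H is evaluated at the next iterate, on a least-squares problem where the implicit equation happens to have a closed form. The data simulation, the step-size schedule a_n = a0/n, and all function names are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch, assuming a streaming least-squares model
# y_n = x_n . theta* + noise (an illustrative setup, not from the paper).
import numpy as np

d, n_steps = 10, 5000
theta_star = np.random.default_rng(42).normal(size=d)

def classical_rm(a0=1.0):
    """Classical (explicit) Robbins-Monro / SGD update."""
    rng = np.random.default_rng(0)  # same seed so both methods see the same stream
    theta = np.zeros(d)
    for n in range(1, n_steps + 1):
        x = rng.normal(size=d)
        y = x @ theta_star + rng.normal()
        a_n = a0 / n
        # Update evaluated at the current iterate theta_{n-1}.
        theta = theta + a_n * (y - x @ theta) * x
    return theta

def implicit_rm(a0=1.0):
    """Implicit update of the proximal type: the residual is evaluated at the
    *next* iterate, theta_n = theta_{n-1} + a_n * (y - x . theta_n) * x.
    For squared-error loss this implicit equation solves in closed form."""
    rng = np.random.default_rng(0)
    theta = np.zeros(d)
    for n in range(1, n_steps + 1):
        x = rng.normal(size=d)
        y = x @ theta_star + rng.normal()
        a_n = a0 / n
        # Closed-form solution of the implicit equation shrinks the step,
        # which is the source of the numerical stability.
        resid = (y - x @ theta) / (1.0 + a_n * (x @ x))
        theta = theta + a_n * resid * x
    return theta

# With a deliberately large initial step size the explicit update can blow up,
# while the implicit update remains stable.
for a0 in (1.0, 20.0):
    err_c = np.linalg.norm(classical_rm(a0) - theta_star)
    err_i = np.linalg.norm(implicit_rm(a0) - theta_star)
    print(f"a0={a0:5.1f}  classical err={err_c:10.3g}  implicit err={err_i:10.3g}")
```

Running the sketch with a0 = 20 typically shows the explicit iterates diverging while the implicit iterates converge at essentially the same rate as with a well-tuned step size, which mirrors the abstract's claim of stability "practically without tradeoffs"; whether this implicit scheme matches the paper's exact proposal is not asserted.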