Towards Understanding Acceleration Tradeoff Between Momentum And Asynchrony In Nonconvex Stochastic Optimization

Authors:
Tianyi Liu Georgia Institute of Technology
Shiyang Li University of California, Santa Barbara
Jianping Shi Sensetime Group Limited
Enlu Zhou Georgia Institute of Technology
Tuo Zhao Georgia Institute of Technology

Introduction:

Asynchronous momentum stochastic gradient descent algorithms (Async-MSGD) have been widely used in distributed machine learning, e.g., training large collaborative filtering systems and deep neural networks. Because establishing convergence properties of Async-MSGD for such highly complicated nonconvex problems is generally infeasible, the authors propose to analyze the algorithm through a simpler but nontrivial nonconvex problem --- streaming PCA.

Abstract:

Asynchronous momentum stochastic gradient descent algorithms (Async-MSGD) have been widely used in distributed machine learning, e.g., training large collaborative filtering systems and deep neural networks. Due to current technical limits, however, establishing convergence properties of Async-MSGD for these highly complicated nonconvex problems is generally infeasible. Therefore, we propose to analyze the algorithm through a simpler but nontrivial nonconvex problem --- streaming PCA. This allows us to make progress toward understanding Async-MSGD and gaining new insights for more general problems. Specifically, by exploiting the diffusion approximation of stochastic optimization, we establish the asymptotic rate of convergence of Async-MSGD for streaming PCA. Our results indicate a fundamental tradeoff between asynchrony and momentum: To ensure convergence and acceleration through asynchrony, we have to reduce the momentum (compared with Sync-MSGD). To the best of our knowledge, this is the first theoretical attempt at understanding Async-MSGD for distributed nonconvex stochastic optimization. Numerical experiments on both streaming PCA and training deep neural networks are provided to support our findings for Async-MSGD.
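To make the setting concrete, here is a minimal sketch (not the authors' exact algorithm or analysis) of momentum SGD on streaming PCA, with asynchrony modeled the usual way as gradient staleness: each heavy-ball update uses the gradient evaluated at an iterate that is `tau` steps old. The function name, the fixed-delay model, and all hyperparameter values below are illustrative assumptions; the paper's tradeoff corresponds to having to shrink the momentum `mu` as the delay `tau` grows.

```python
import numpy as np

def async_msgd_streaming_pca(cov, n_steps=50000, eta=0.005, mu=0.5, tau=4, seed=0):
    """Heavy-ball SGD for the top eigenvector of `cov`, with gradient staleness `tau`.

    Streaming PCA here means maximizing w^T Sigma w over the unit sphere,
    from i.i.d. samples x_t ~ N(0, Sigma) seen one at a time.
    """
    rng = np.random.default_rng(seed)
    d = cov.shape[0]
    L = np.linalg.cholesky(cov)          # to draw x = L z with z ~ N(0, I)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    v = np.zeros(d)                      # momentum buffer
    history = [w.copy()]                 # past iterates, to read stale ones
    for _ in range(n_steps):
        x = L @ rng.standard_normal(d)
        # Asynchrony model: the gradient is computed at a tau-step-old iterate.
        w_stale = history[max(0, len(history) - 1 - tau)]
        g = -(x @ w_stale) * x           # stochastic gradient of -(1/2) w^T x x^T w
        v = mu * v - eta * g             # heavy-ball (momentum) update
        w = w + v
        w /= np.linalg.norm(w)           # project back onto the unit sphere
        history.append(w.copy())
    return w

# Toy covariance with a well-separated top eigenvector e_1.
cov = np.diag([4.0, 1.0, 0.5, 0.25])
w = async_msgd_streaming_pca(cov)
alignment = abs(w[0])                    # |<w, e_1>|, close to 1 on success
```

With a small delay and moderate momentum the iterate aligns with the top eigenvector; pushing `mu` toward 1 while keeping `tau` fixed is where, per the paper's result, convergence starts to break down.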
