Step Size Matters In Deep Learning

Authors:
Kamil Nar, University of California, Berkeley
Shankar Sastry, Department of EECS, UC Berkeley

Introduction:

Training a neural network with the gradient descent algorithm gives rise to a discrete-time nonlinear dynamical system. To elucidate the effects of the step size on the training of neural networks, the authors study the gradient descent algorithm as a discrete-time dynamical system, and by analyzing the Lyapunov stability of different solutions, they show the relationship between the step size of the algorithm and the solutions that the algorithm can converge to.

Abstract:

Training a neural network with the gradient descent algorithm gives rise to a discrete-time nonlinear dynamical system. Consequently, behaviors that are typically observed in these systems emerge during training, such as convergence to an orbit but not to a fixed point, or dependence of convergence on the initialization. The step size of the algorithm plays a critical role in these behaviors: it determines the subset of the local optima that the algorithm can converge to, and it specifies the magnitude of the oscillations if the algorithm converges to an orbit. To elucidate the effects of the step size on the training of neural networks, we study the gradient descent algorithm as a discrete-time dynamical system, and by analyzing the Lyapunov stability of different solutions, we show the relationship between the step size of the algorithm and the solutions that can be obtained with this algorithm. The results provide an explanation for several phenomena observed in practice, including the deterioration in the training error with increased depth, the hardness of estimating linear mappings with large singular values, and the distinct performance of deep residual networks.
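The core idea can be illustrated with a minimal sketch (not the paper's code): for a scalar quadratic loss f(w) = (λ/2)·w², gradient descent is the discrete-time linear system w ← (1 − δλ)·w, whose fixed point w* = 0 is Lyapunov stable only when the step size δ < 2/λ. A solution with larger curvature λ (e.g. a linear mapping with a large singular value) is therefore stable for a narrower range of step sizes, which is the mechanism behind the convergence results in the abstract.

```python
# Sketch: gradient descent on f(w) = 0.5 * lam * w**2 viewed as the
# discrete-time dynamical system w_{k+1} = (1 - delta * lam) * w_k.
# The minimum w* = 0 attracts the iterates only if |1 - delta*lam| < 1,
# i.e. delta < 2 / lam; beyond that threshold the fixed point is unstable.

def gradient_descent(lam, delta, w0=1.0, steps=100):
    """Run `steps` gradient steps on the quadratic loss with curvature `lam`."""
    w = w0
    for _ in range(steps):
        w = w - delta * lam * w  # gradient step: f'(w) = lam * w
    return w

# Step size below the stability threshold 2/lam = 0.5: converges to 0.
print(abs(gradient_descent(lam=4.0, delta=0.1)))  # ~0, fixed point stable

# Step size above 2/lam: the same fixed point becomes unstable and
# the iterates diverge with growing oscillations.
print(abs(gradient_descent(lam=4.0, delta=0.6)))  # huge, fixed point unstable
```

At exactly δ = 2/λ the update multiplies w by −1, so the iterates converge to an orbit oscillating between w0 and −w0 rather than to the fixed point, matching the orbit behavior described above.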
