Hessian-based Analysis Of Large Batch Training And Robustness To Adversaries

Authors:
Zhewei Yao, UC Berkeley
Amir Gholami, UC Berkeley
Qi Lei, University of Texas at Austin
Kurt Keutzer, UC Berkeley
Michael W. Mahoney, UC Berkeley

Introduction:

Large batch size training of neural networks has been shown to incur accuracy loss when trained with current methods. Here, the authors study large batch size training through the lens of the Hessian operator and robust optimization.

Abstract:

Large batch size training of neural networks has been shown to incur accuracy loss when trained with current methods. The exact underlying reasons for this are still not completely understood. Here, we study large batch size training through the lens of the Hessian operator and robust optimization. In particular, we perform a Hessian-based study to analyze exactly how the landscape of the loss function changes when training with large batch size. We compute the true Hessian spectrum, without approximation, by back-propagating the second derivative. Extensive experiments on multiple networks show that saddle points are not the cause of the generalization gap of large batch size training, and the results consistently show that large batch training converges to points with a noticeably larger Hessian spectrum. Furthermore, we show that robust training allows one to favor flat areas, as points with a large Hessian spectrum show poor robustness to adversarial perturbation. We further study this relationship, and provide empirical and theoretical proof that the inner loop of robust training is a saddle-free optimization problem almost everywhere. We present detailed experiments with five different network architectures, including a residual network, tested on the MNIST and CIFAR-10/100 datasets.
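The spectrum computation described above rests on Hessian-vector products: in a network, one extra backward pass yields the product of the Hessian with a vector without ever forming the full matrix, and power iteration on those products recovers the top eigenvalue. The sketch below illustrates the idea on a toy quadratic loss whose Hessian is known exactly; the names (`hessian_vector_product`, `top_hessian_eigenvalue`) and the stand-in matrix `A` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy quadratic loss L(w) = 0.5 * w^T A w, so the Hessian is A and the
# top eigenvalue is known (5.0 here). This is only a stand-in: in a real
# network the product below would come from back-propagating the second
# derivative, not from an explicit matrix.
A = np.diag([1.0, 2.0, 5.0])

def hessian_vector_product(v):
    # For a network this is one additional backward pass; for the
    # quadratic stand-in it reduces to a matrix-vector product.
    return A @ v

def top_hessian_eigenvalue(hvp, dim, iters=100, seed=0):
    """Matrix-free power iteration on the Hessian."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = hvp(v)
        v = hv / np.linalg.norm(hv)
    # Rayleigh quotient at the converged direction.
    return v @ hvp(v)

print(top_hessian_eigenvalue(hessian_vector_product, dim=3))
```

Because the iteration only ever calls `hvp`, the same routine applies unchanged to a full network loss, which is what makes the "true spectrum, without approximation" computation tractable at scale.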
