Training Neural Networks Using Features Replay

Authors:
Zhouyuan Huo University of Pittsburgh
Bin Gu University of Pittsburgh
Heng Huang University of Pittsburgh

Abstract:

Training a neural network with the backpropagation algorithm requires passing error gradients sequentially through the network. This backward locking prevents us from updating network layers in parallel and fully leveraging the available computing resources. Recently, several works have tried to decouple and parallelize the backpropagation algorithm. However, all of them suffer from severe accuracy loss or memory explosion when the neural network is deep. To address these challenging issues, we propose a novel parallel-objective formulation for the objective function of the neural network. Building on this formulation, we introduce the features replay algorithm and prove that it is guaranteed to converge to critical points of the non-convex problem under certain conditions. Finally, we apply our method to training deep convolutional neural networks, and the experimental results show that the proposed method achieves faster convergence, lower memory consumption, and better generalization error than the compared methods.
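To make the backward-locking problem concrete, below is a minimal PyTorch sketch of the general idea of decoupling via stored and replayed features: the network is split into two modules, the upper module updates immediately, and the lower module replays the previous step's stored input together with the one-step-delayed boundary gradient. This is an illustrative sketch under stated assumptions, not the authors' implementation; the module split, optimizer choice, and all names (f1, f2, pending) are hypothetical.

```python
# Minimal sketch of the features-replay idea (not the paper's code).
# Assumptions: PyTorch, a toy 2-module split, plain SGD.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Split a small network into two sequentially stacked modules.
f1 = nn.Sequential(nn.Linear(8, 16), nn.ReLU())   # lower module
f2 = nn.Linear(16, 4)                             # upper module
opt1 = torch.optim.SGD(f1.parameters(), lr=0.1)
opt2 = torch.optim.SGD(f2.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

pending = None  # (stored input, boundary gradient) from the previous step

for step in range(10):
    x, y = torch.randn(32, 8), torch.randn(32, 4)

    # Lower module: forward only. Detaching at the module boundary means
    # f2's backward pass never has to wait on f1's backward pass -- this
    # is what breaks the backward locking described in the abstract.
    h = f1(x).detach().requires_grad_(True)

    # Upper module: forward + backward + update, immediately.
    loss = loss_fn(f2(h), y)
    opt2.zero_grad()
    loss.backward()
    opt2.step()

    # Lower module: "replay" the input stored at the previous step through
    # its current weights, and apply the matching (one-step-stale) gradient
    # received at the boundary. In the paper the modules run in parallel;
    # here the pipeline is serialized for clarity.
    if pending is not None:
        x_old, g_old = pending
        opt1.zero_grad()
        f1(x_old).backward(g_old)
        opt1.step()
    pending = (x, h.grad)
```

Note that each module only stores its own input features and recomputes the rest on replay, which is consistent with the low memory consumption the abstract claims relative to methods that must cache all intermediate activations.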
