Power- and Endurance-Aware Neural Network Training in NVM-Based Platforms

2018 
Neural networks (NNs) have become the go-to tool for solving many real-world recognition and classification tasks with massive and complex data sets. These networks require large data sets for training, which is usually performed on GPUs and CPUs in either a cloud or edge computing setting. No matter where the training is performed, it is subject to tight power/energy and data storage/transfer constraints. While these issues can be mitigated by replacing SRAM/DRAM with nonvolatile memories (NVMs), which offer near-zero leakage power and high scalability, the massive number of weight updates performed during training exhausts NVM endurance and incurs high write energy. In this paper, an NVM-friendly NN training approach is proposed. The weight-update procedure is redesigned to reduce bit flips in NVM cells. Moreover, two techniques, namely, filter exchange and bitwise rotation, are proposed to balance writes across different weights and across the bits of each weight, respectively. The proposed techniques are integrated and evaluated in Caffe. Experimental results show significant power savings and endurance improvements, while maintaining high inference accuracy.
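To make the bitwise-rotation idea concrete, the sketch below illustrates how rotating a stored weight word by a per-write offset can spread bit flips evenly across the bit positions of an NVM cell group. This is a minimal illustrative sketch, not the paper's exact algorithm: 8-bit quantized weights, the rotation policy (offset derived from a write counter), and the helper names `rotate_left`, `bit_flips`, and `write_with_rotation` are assumptions for demonstration only.

```python
# Illustrative sketch of bitwise rotation as wear leveling across the bits of
# one weight word. Assumes 8-bit quantized weights; all names are hypothetical.

WORD_BITS = 8
MASK = (1 << WORD_BITS) - 1


def rotate_left(word, k, bits=WORD_BITS):
    """Rotate an unsigned `bits`-wide word left by k positions."""
    k %= bits
    return ((word << k) | (word >> (bits - k))) & ((1 << bits) - 1)


def bit_flips(old, new):
    """Number of NVM cell flips needed to overwrite `old` with `new`."""
    return bin(old ^ new).count("1")


def write_with_rotation(stored_word, new_value, write_count):
    """Store `new_value` rotated by an offset derived from the write counter,
    so successive updates distribute flips over all bit positions."""
    offset = write_count % WORD_BITS
    rotated = rotate_left(new_value & MASK, offset)
    return rotated, offset, bit_flips(stored_word, rotated)


def read_with_rotation(stored_word, offset):
    """Undo the rotation applied at write time to recover the weight value."""
    return rotate_left(stored_word, WORD_BITS - (offset % WORD_BITS))


# Example: repeated small weight updates land on different physical bits.
stored, count = 0b00000001, 0
for new_val in (0b00000011, 0b00000010, 0b00000110):
    stored, off, flips = write_with_rotation(stored, new_val, count)
    assert read_with_rotation(stored, off) == new_val
    count += 1
```

In this sketch the rotation offset must be tracked (or derivable from the write counter) so reads can restore the original bit order; a real design would fold that bookkeeping into the memory controller rather than per-weight metadata.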