Time-Domain Computing in Memory Using Spintronics for Energy-Efficient Convolutional Neural Network

2021 
The data-transfer bottleneck in the von Neumann architecture, caused by the separation of processor and memory, hinders the development of high-performance computing. The computing-in-memory (CIM) concept is widely considered a promising solution to this issue. In this article, we present a time-domain CIM (TD-CIM) scheme using spintronics, which can be applied to construct energy-efficient convolutional neural networks (CNNs). Basic Boolean logic operations are implemented by recording the bit-line output at different moments. A multi-addend addition mechanism is then introduced on top of the TD-CIM circuit, which eliminates cascaded full adders. To further improve the compatibility of the TD-CIM circuit with CNNs, we also propose a quantization method that transforms the floating-point parameters of pre-trained CNN models into fixed-point parameters. Finally, we build a TD-CIM architecture integrating a highly reconfigurable array of field-free spin-orbit torque magnetic random-access memory (SOT-MRAM) and evaluate its benefits for the quantized CNN. For digit recognition on the MNIST dataset, the delay and energy are reduced by 1.2-2.7 times and $2.4\times10^{3}$-$1.1\times10^{4}$ times, respectively, compared with STT-CIM and CRAM based on spintronic memory. The recognition accuracy reaches 98.65% on MNIST and 91.11% on CIFAR-10.
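The quantization step described above can be illustrated with a minimal sketch. The symmetric per-tensor scaling, 8-bit width, and function names below are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def quantize_to_fixed_point(weights: np.ndarray, n_bits: int = 8):
    """Map floating-point weights to signed fixed-point integer codes.

    Illustrative symmetric per-tensor scheme (an assumption; the paper's
    exact quantization method may differ). Returns the integer codes and
    the scale factor needed to dequantize. int8 storage assumes n_bits <= 8.
    """
    qmax = 2 ** (n_bits - 1) - 1            # e.g. 127 for 8-bit signed
    scale = np.max(np.abs(weights)) / qmax  # one scale per tensor (assumption)
    codes = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return codes, scale

def dequantize(codes: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate floating-point weights from fixed-point codes."""
    return codes.astype(np.float32) * scale

# Example: quantize a random conv-layer weight tensor and check the error.
w = np.random.randn(16, 3, 3, 3).astype(np.float32)
q, s = quantize_to_fixed_point(w, n_bits=8)
err = np.max(np.abs(w - dequantize(q, s)))
print(f"max abs quantization error: {err:.5f}")
```

With fixed-point codes of this kind, the multiply-accumulate operations of a convolution reduce to integer additions, which is what allows the multi-addend addition mechanism in the TD-CIM circuit to evaluate them without cascaded full adders.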