Deep Neural Network Accelerator with Spintronic Memory

2020 
Utilizing emerging nonvolatile memories to accelerate deep neural networks (DNNs) is considered one of the most promising approaches to relieving the data-transfer bottleneck during multiply-and-accumulate (MAC) operations. Among these memories, spintronic devices are particularly attractive owing to their low access power, fast access speed, high density, and relatively mature fabrication process. As shown in Fig. 1, existing designs can be divided into three technical routes according to how the DNN computation is carried out. The first is an "analog" method [1, 2], shown in Fig. 1(a). The digital input signals are converted into multi-level voltages and applied to different columns of the memory array, so that the MAC results can be obtained from each column with a current integrator and an analog-to-digital converter (ADC). In addition, the word-line (WL) drivers can control the pulse width applied to different rows to realize multi-bit weights. This method can in principle achieve high energy efficiency and computing speed; however, the device variation of the magnetic tunnel junction (MTJ) can degrade computing accuracy, and the power consumption and area overhead of the ADCs remain challenging. The other two methods operate in a "digital" manner and realize MAC computation through row-by-row read/write operations. Fig. 1(b) shows the second, read-based method [3]. The network weights are stored in the memory cells, and by feeding the input signal to a modified sense amplifier (SA), the XOR function, which is the core operation of binary neural networks (BNNs), is computed between the input and the stored content. However, modifying the SA usually means adding extra transistors to the read path, which increases the bit error rate. Fig. 1(c) shows the third approach, which is based on "stateful logic" [4]. The input data are sent to a modified write driver while the WL receives the weight signals from the external I/O; based on this logic paradigm, the XOR function for BNNs can be realized within one or a few memory cells during a single write cycle. In this talk, we review the main research directions of DNN accelerators based on spintronic memories. In particular, we introduce our recent work on DNN acceleration, which can be implemented with different spintronic memories.