SOT-MRAM-Based Analog In-Memory Computing for DNN Inference

2020 
Deep neural network (DNN) inference requires a massive number of matrix-vector multiplications, which can be computed efficiently on memory arrays in an analog fashion. This approach requires highly resistive memory devices $(> \mathrm{M}\Omega)$ with low resistance variability to implement DNN weight memories. We propose an optimized Spin-Orbit Torque MRAM (SOT-MRAM) as weight memory in Analog in-Memory Computing (AiMC) systems for DNN inference. In SOT-MRAM the write and read paths are decoupled. This allows raising the MTJ resistance to the high levels required for AiMC by tuning the tunnel barrier thickness without affecting the writing. The target resistance level and variation are derived from an algorithm-driven design-technology co-optimization (DTCO) study. Resistance levels are obtained from IR-drop simulations of a convolutional neural network (CNN). Variation limits are obtained by testing two noise-resilient CNNs with conductance variability. Finally, we demonstrate experimentally that the requirements for analog DNN inference are met by SOT-MRAM stack optimization.
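The core idea above — storing weights as device conductances and injecting device-to-device variability into the matrix-vector multiply — can be sketched as follows. This is a minimal illustration, not the paper's simulation framework; the conductance scale `G_MAX`, the 5% relative variation, and the differential-pair weight mapping are assumed parameters chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not from the paper): device conductance scale
# corresponding to a ~1 MOhm resistance, and a relative conductance spread.
G_MAX = 1e-6       # siemens
REL_SIGMA = 0.05   # 5% device-to-device conductance variability

def mvm_analog(weights, x, rel_sigma=REL_SIGMA, rng=rng):
    """Analog matrix-vector multiply on a crossbar: weights in [-1, 1] are
    mapped to a differential pair of conductances, each perturbed by
    multiplicative Gaussian variation, then summed per column (Kirchhoff)."""
    g_pos = np.clip(weights, 0, None) * G_MAX
    g_neg = np.clip(-weights, 0, None) * G_MAX
    g_pos = g_pos * (1 + rel_sigma * rng.standard_normal(g_pos.shape))
    g_neg = g_neg * (1 + rel_sigma * rng.standard_normal(g_neg.shape))
    currents = (g_pos - g_neg) @ x   # I = G V, currents summed on bit lines
    return currents / G_MAX          # rescale back to weight units

W = rng.uniform(-1, 1, size=(8, 16))
x = rng.uniform(0, 1, size=16)
ideal = W @ x
noisy = mvm_analog(W, x)
print("max MVM error from variability:", np.max(np.abs(noisy - ideal)))
```

Sweeping `rel_sigma` and measuring the resulting accuracy drop of a CNN is one way to derive a variation limit of the kind the abstract describes; the paper's actual DTCO study additionally accounts for IR drop along the array wires, which this sketch omits.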