A Hybrid RRAM-SRAM Computing-In-Memory Architecture for Deep Neural Network Inference-Training Edge Acceleration
2021
This paper presents a hybrid computing-in-memory architecture for the inference and training stages of a two-layer deep neural network, using 96 Kb of RRAM and 4 Kb of 7T SRAM. Combining the merits of RRAM and SRAM, the hybrid architecture provides fast weight updating for training while achieving 997x lower standby power consumption and 1.35x higher area efficiency than an SRAM-only scheme. A classification accuracy of 91% is obtained on a resized MNIST task.
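The abstract's core idea, keeping rarely rewritten weights in dense, low-standby-power RRAM while routing frequently updated training weights through SRAM, can be illustrated with a purely conceptual software sketch. Everything below is an assumption for illustration (the layer sizes, 4-bit quantization standing in for limited RRAM conductance levels, and the choice to train only the SRAM-resident layer); it is not the paper's actual circuit or training method.

```python
import numpy as np

# Conceptual sketch (not the paper's implementation): a two-layer network
# where layer-1 weights model "RRAM" (quantized, programmed once) and
# layer-2 weights model "SRAM" (full precision, updated every step).
rng = np.random.default_rng(0)

def quantize(w, bits=4):
    # Model limited RRAM conductance levels with uniform quantization.
    levels = 2 ** bits - 1
    scale = float(np.abs(w).max()) or 1.0
    return np.round(w / scale * (levels / 2)) / (levels / 2) * scale

rram_w1 = quantize(rng.normal(0, 0.1, (16, 8)))   # fixed after programming
sram_w2 = rng.normal(0, 0.1, (8, 4))              # fast, frequent updates

x = rng.normal(0, 1, (32, 16))
y = rng.normal(0, 1, (32, 4))

h = np.maximum(x @ rram_w1, 0.0)                  # ReLU hidden layer (RRAM MACs)
initial_loss = np.mean((h @ sram_w2 - y) ** 2)

# Training touches only the SRAM-resident layer, mimicking the
# fast-weight-update path; the RRAM layer is never rewritten.
for _ in range(100):
    grad = h.T @ (h @ sram_w2 - y) / len(x)
    sram_w2 -= 0.1 * grad

final_loss = np.mean((h @ sram_w2 - y) ** 2)
```

The design choice the sketch mirrors: writes are cheap and fast in SRAM but slow and endurance-limited in RRAM, so the training-time update traffic is confined to the small SRAM array while the bulk of the weights sit in RRAM.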