
SP-PIM: A 22.41TFLOPS/W, 8.81Epochs/Sec Super-Pipelined Processing-In-Memory Accelerator with Local Error Prediction for On-Device Learning


Abstract:

This paper presents SP-PIM, which demonstrates real-time on-device learning through a holistic, multi-level pipelining scheme enabled by local error prediction. It introduces a local error prediction unit that makes the training algorithm pipelineable while reducing computation overhead and overall external memory access by relying on power-of-two arithmetic and random weights. Its double-buffered PIM macro performs both forward propagation and gradient calculation, while dual-sparsity-aware circuits exploit sparsity in both activations and errors. Fabricated in a 28nm process, the 5.76mm² SP-PIM chip achieves on-chip model training at 8.81 epochs/s with state-of-the-art area efficiency of 560.6GFLOPS/mm² and power efficiency of 22.4TFLOPS/W.
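The abstract does not spell out how local error prediction works; the minimal sketch below is only an illustration of the general idea it alludes to, assuming the predictor resembles a direct-feedback-alignment-style fixed random projection whose weights are rounded to powers of two so that the projection reduces to shift-and-add operations. All names, shapes, and the quantization routine are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def quantize_pow2(w, eps=1e-12):
        """Round each weight magnitude to the nearest power of two so that
        multiplies become shifts (illustrative, not the paper's exact scheme)."""
        sign = np.sign(w)
        exponent = np.round(np.log2(np.maximum(np.abs(w), eps)))
        return sign * 2.0 ** exponent

    rng = np.random.default_rng(0)

    # Hypothetical sizes: hidden width 256, 10 output classes.
    hidden, classes = 256, 10

    # Fixed random projection used as the local error predictor.
    B = quantize_pow2(rng.normal(scale=0.1, size=(classes, hidden)))

    def local_error(output_error):
        """Predict a layer-local error directly from the output error,
        avoiding the wait for layer-by-layer backpropagation."""
        return output_error @ B  # shift-add friendly thanks to power-of-two B

    # Example: map a batch of 4 output errors to hidden-layer errors.
    out_err = rng.normal(size=(4, classes))
    print(local_error(out_err).shape)  # (4, 256)

Because the predictor depends only on the output error and a fixed random matrix, each layer's error becomes available immediately, which is what allows forward propagation, error prediction, and gradient calculation to be overlapped in a pipeline.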
Date of Conference: 11-16 June 2023
Date Added to IEEE Xplore: 24 July 2023
Conference Location: Kyoto, Japan

