
A 1W8R 20T SRAM Codebook for 20% Energy Reduction in Mixed-Precision Deep-Learning Inference Processor System


Abstract:

This study introduces a 1W8R 20T multiport SRAM for codebook quantization in deep-learning processors. The memory was fabricated in a 40 nm process and achieves a read-access time of 2.75 ns at 2.7 pJ/byte. In addition, we used NVDLA, NVIDIA's deep-learning accelerator, as a reference design and simulated it with power figures measured from the fabricated memory. The resulting power and area reductions are 20.24% and 26.24%, respectively.
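For context, codebook (lookup-table) quantization stores each weight as a small index into a shared table of representative values, and the datapath dequantizes weights by table lookup at inference time; the proposed multiport SRAM holds that table and serves several lookups in parallel. The Python sketch below illustrates only the software side of the idea; the function names, the 16-entry codebook size, and the simple 1-D k-means clustering are illustrative assumptions, not details taken from the paper.

import numpy as np

def build_codebook(weights, k=16, iters=20, seed=0):
    # Cluster the weight values into a k-entry codebook (simple 1-D k-means).
    rng = np.random.default_rng(seed)
    w = weights.ravel().astype(np.float64)
    codebook = rng.choice(w, size=k, replace=False)
    for _ in range(iters):
        # Assign each weight to its nearest codebook entry (the stored index).
        idx = np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)
        # Move each entry to the mean of the weights assigned to it.
        for j in range(k):
            members = w[idx == j]
            if members.size:
                codebook[j] = members.mean()
    return codebook, idx.reshape(weights.shape)

def dequantize(indices, codebook):
    # Reconstruct weights by codebook lookup; in hardware this lookup is the
    # read traffic that the shared codebook memory must serve for every MAC.
    return codebook[indices]

w = np.random.default_rng(1).standard_normal((64, 64))
codebook, idx = build_codebook(w, k=16)   # 16 entries -> 4-bit indices per weight
w_hat = dequantize(idx, codebook)
print("mean abs quantization error:", np.abs(w - w_hat).mean())

With a 16-entry table, each weight is stored as a 4-bit index instead of a full-precision value, which is the memory-traffic saving that motivates keeping the codebook in a small, fast multiport SRAM.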
Date of Conference: 11-13 June 2023
Date Added to IEEE Xplore: 07 July 2023
Conference Location: Hangzhou, China
