Abstract:
This study introduces a 1W8R 20T multiport memory for codebook quantization in deep-learning processors. We manufactured the memory in a 40 nm process and achieved a read-access time of 2.75 ns and a power consumption of 2.7 pJ/byte. In addition, we used NVDLA, NVIDIA's open-source deep-learning accelerator, as a motif and simulated it based on the power obtained from the fabricated memory. The resulting power and area reductions are 20.24% and 26.24%, respectively.
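For context, the sketch below illustrates the general idea of codebook quantization that such a multiport memory serves: weights are stored as small indices into a shared codebook, and dequantization is a table lookup, which a 1-write/8-read memory can supply to many compute lanes in parallel. The codebook size (256 entries) and the quantization rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative sketch (assumed parameters, not from the paper): codebook
# quantization replaces each full-precision weight with a small index into a
# shared codebook; inference then dequantizes by table lookup.
rng = np.random.default_rng(0)

weights = rng.standard_normal(1024).astype(np.float32)    # original FP32 weights
codebook = np.linspace(weights.min(), weights.max(), 256)  # assumed 256-entry codebook (8-bit indices)

# Quantize: map each weight to the index of its nearest codebook entry.
indices = np.abs(weights[:, None] - codebook[None, :]).argmin(axis=1).astype(np.uint8)

# Dequantize: one codebook read per weight; a 1W8R memory could serve eight
# such lookups per cycle to parallel MAC lanes.
dequantized = codebook[indices]

print("mean abs quantization error:", np.abs(weights - dequantized).mean())
```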
Published in: 2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS)
Date of Conference: 11-13 June 2023
Date Added to IEEE Xplore: 07 July 2023