Abstract:
As Binary Neural Networks (BNNs) started to show promising performance with limited memory and computational cost, various RRAM-based in-memory BNN accelerator designs have been proposed. While a single RRAM cell can represent a binary weight, previous designs had to use two RRAM cells per weight to enable the XNOR operation between a binary weight and a binary activation. In this work, we propose to convert the XNOR-based computation to RRAM-friendly multiplication without any accuracy loss, so that we can reduce the required number of RRAM cells by half. As the number of cells required to compute a BNN model is reduced, the energy and area overhead is also reduced. Experimental results show that the proposed in-memory accelerator architecture achieves ~1.9x area efficiency improvement and ~1.8x energy efficiency improvement over previous architectures on various image classification benchmarks.
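The XNOR-based computation the abstract refers to is the standard BNN equivalence: for weights and activations in {-1, +1} encoded as bits {0, 1}, the dot product equals 2·popcount(XNOR(w, a)) − N. A minimal sketch of that relation (variable names are illustrative, not from the paper, and this does not reproduce the paper's specific RRAM mapping):

```python
import numpy as np

def xnor_popcount_dot(w_bits, a_bits):
    """Dot product of {-1,+1} vectors computed from their {0,1} bit encodings."""
    n = len(w_bits)
    matches = 1 - (w_bits ^ a_bits)      # XNOR: 1 where bits agree
    return 2 * int(matches.sum()) - n    # (#agree) - (#disagree)

def real_valued_dot(w_bits, a_bits):
    """Reference: decode bits to {-1,+1} and take the ordinary dot product."""
    w = 2 * w_bits - 1
    a = 2 * a_bits - 1
    return int((w * a).sum())

rng = np.random.default_rng(0)
w = rng.integers(0, 2, size=64)
a = rng.integers(0, 2, size=64)
assert xnor_popcount_dot(w, a) == real_valued_dot(w, a)
```

The paper's contribution, per the abstract, is reformulating this XNOR/popcount form into a plain multiplication that a single RRAM cell per weight can realize, rather than the two cells per weight that prior XNOR-capable designs required.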
Published in: 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS)
Date of Conference: 06-09 June 2021
Date Added to IEEE Xplore: 23 June 2021