Abstract:
Recent progress in machine learning enables low bit-level Convolutional Neural Networks (CNNs), even CNNs with binary weights and binary neurons, to achieve satisfying recognition accuracy on the ImageNet dataset. Binary CNNs (BCNNs) make it possible to introduce low bit-level RRAM devices and low bit-level ADC/DAC interfaces into RRAM-based Computing System (RCS) design, which leads to faster read-and-write operations and better energy efficiency than before. However, some design challenges remain: (1) how to split the weight matrix when one crossbar is not large enough to hold all parameters of one layer; (2) how to design the pipeline to accelerate the whole CNN forward process. In this paper, an RRAM crossbar-based accelerator is proposed for the BCNN forward process, and its BCNN-specific design is discussed in detail, especially the matrix splitting problem and the pipeline implementation. In our experiments, BCNNs on RRAM show much smaller accuracy loss than multi-bit CNNs for LeNet on MNIST when device variation is considered. For AlexNet on ImageNet, the RRAM-based BCNN accelerator saves 58.2% of energy consumption and 56.8% of area compared with a multi-bit CNN structure.
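The two ideas the abstract names can be illustrated in software: with weights and activations in {-1, +1}, a dot product reduces to XNOR followed by popcount, and a layer whose weight matrix exceeds one crossbar can be split column-wise across several crossbars whose partial sums are accumulated. The sketch below is illustrative only, assuming a toy crossbar width (`xbar_size=4`) and hypothetical function names; it is not the paper's implementation.

```python
import numpy as np

def binarize(x):
    # Sign binarization to {-1, +1}, as commonly used in BCNNs.
    return np.where(x >= 0, 1, -1)

def xnor_popcount_matvec(W_b, x_b):
    # For +/-1 vectors of length n: dot = matches - mismatches
    #                                   = 2 * popcount(XNOR) - n.
    n = W_b.shape[1]
    Wb = W_b > 0            # map +/-1 -> bits {1, 0}
    xb = x_b > 0
    xnor = ~(Wb ^ xb)       # boolean XNOR: True where signs match
    return 2 * xnor.sum(axis=1) - n

def split_matvec(W_b, x_b, xbar_size=4):
    # Split the weight matrix along the input dimension into
    # fixed-size "crossbars" and accumulate the partial results,
    # mimicking matrix splitting when one crossbar is too small.
    parts = [
        xnor_popcount_matvec(W_b[:, i:i + xbar_size], x_b[i:i + xbar_size])
        for i in range(0, W_b.shape[1], xbar_size)
    ]
    return np.sum(parts, axis=0)
```

A quick check: `split_matvec(W_b, x_b)` should match the plain product `W_b @ x_b` for any binarized inputs, since splitting only reorders the additions.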
Date of Conference: 16-19 January 2017
Date Added to IEEE Xplore: 20 February 2017
Electronic ISSN: 2153-697X