Abstract:
RRAM technology is a promising candidate for implementing efficient AI accelerators that perform extensive multiply-accumulate operations. By arranging RRAM devices into a synaptic crossbar array, the computations can be realized in situ, avoiding frequent weight transfers between the processing units and memory. Moreover, since the computations are conducted in the analog domain with high flexibility, applying multilevel input voltages to RRAM devices with multilevel conductance states further enhances computational efficiency. However, several non-idealities of emerging RRAM technology may degrade the reliability of the system. In this paper, we measure and investigate the impact of read disturb on RRAM devices under different input voltages, which causes conductance drift and introduces errors. The measured data are used to simulate RRAM-based AI inference engines with multilevel conductance states and input voltages. Device-to-device variability is also taken into account to assess the accuracy drop. Two convolutional neural networks, LeNet-5 and VGG-7, are benchmarked on the MNIST and CIFAR-10 datasets, respectively. Our results show that mapping weights with differential pairs yields better robustness to read disturb and variability effects.
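The robustness claim for differential-pair mapping can be illustrated with a minimal sketch of a crossbar multiply-accumulate. All names and values below are illustrative assumptions, not the paper's measured data: each signed weight is mapped as the difference of two device conductances (G+ and G-), the column current is the dot product of the input voltages with the conductances, and read disturb is modeled here, purely for illustration, as a common-mode conductance drift that cancels in the differential output.

```python
import numpy as np

# Illustrative differential-pair crossbar MAC (hypothetical values,
# arbitrary units; not the paper's measured device data).

weights = np.array([0.5, -0.3, 0.8, -0.1])  # signed synaptic weights
inputs = np.array([1.0, 0.5, 0.25, 1.0])    # multilevel input voltages

g_off = 1.0  # bias conductance so both devices stay in a valid range

# Differential mapping: w = G_plus - G_minus
g_plus = g_off + np.clip(weights, 0.0, None)
g_minus = g_off + np.clip(-weights, 0.0, None)

# Read disturb modeled (simplistically) as a common-mode drift
# applied equally to every device in both columns.
drift = 0.05
g_plus_d = g_plus + drift
g_minus_d = g_minus + drift

# Column current: I = sum_i(V_i * G_i); the differential output
# subtracts the two column currents.
out_ideal = inputs @ (g_plus - g_minus)
out_drift = inputs @ (g_plus_d - g_minus_d)

print(out_ideal)  # 0.45
print(out_drift)  # 0.45 -- the common-mode drift cancels exactly
```

In this toy model the drift cancels perfectly because it is common-mode; in real devices the drift depends on each cell's state and input voltage, so the cancellation is only partial, which is what the paper's measurement-driven simulations quantify.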
Published in: 2022 IEEE International Symposium on Defect and Fault Tolerance in VLSI and Nanotechnology Systems (DFT)
Date of Conference: 19-21 October 2022
Date Added to IEEE Xplore: 30 November 2022