
Synchronized Analog Capacitor Arrays for Parallel Convolutional Neural Network Training

Abstract:

We report novel Synchronized Analog Capacitor Arrays (SACA) to accelerate Convolutional Neural Network (CNN) training. The synchronized cross-point capacitor arrays, functioning as replicated weight kernels, train on image patches in parallel. Parallel CNN training is challenging in analog arrays because the weights of the replicated kernels diverge as each replica trains on different data. Capacitor arrays solve this problem by charge sharing between correlated capacitors across the kernel replicas, keeping them synchronized. Using SACA, we show that CNN training can be accelerated by >100x compared to other analog accelerators.
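The abstract's synchronization mechanism can be illustrated numerically. Below is a minimal sketch, assuming equal-size capacitors so that charge sharing between correlated capacitors equalizes their voltages at the common mean; the replica count, kernel shape, learning rate, random gradients, and function names (local_update, charge_share) are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

# Hypothetical parameters for illustration; the paper does not specify these.
N_REPLICAS = 4        # number of replicated kernel arrays (one per image patch)
KERNEL_SHAPE = (3, 3)
LEARNING_RATE = 0.01

rng = np.random.default_rng(0)

# Each replica stores the kernel weights as capacitor charges.
# All replicas start from the same weights.
weights = np.tile(rng.standard_normal(KERNEL_SHAPE), (N_REPLICAS, 1, 1))

def local_update(weights, gradients, lr=LEARNING_RATE):
    """Each replica applies its own gradient, computed from its own patch.
    Without synchronization, the replicas drift apart (weight divergence)."""
    return weights - lr * gradients

def charge_share(weights):
    """Model of charge sharing: connecting equal-size capacitors that hold
    the same logical weight equalizes their charge at the common mean,
    re-synchronizing all kernel replicas after each update."""
    mean = weights.mean(axis=0, keepdims=True)
    return np.broadcast_to(mean, weights.shape).copy()

# One training step over N_REPLICAS patches in parallel:
gradients = rng.standard_normal(weights.shape)   # stand-in for per-patch gradients
weights = local_update(weights, gradients)       # replicas diverge here
print("max divergence before sharing:", np.ptp(weights, axis=0).max())
weights = charge_share(weights)                  # replicas re-synchronized
print("max divergence after sharing: ", np.ptp(weights, axis=0).max())
```

In this reading, averaging the per-replica updates plays the role the charge-sharing interconnect plays in hardware: each replica contributes its patch's gradient, and the shared charge leaves every replica holding the same synchronized weight, which is what allows the patches to be trained in parallel without kernel divergence.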
Date of Conference: 09-12 August 2020
Date Added to IEEE Xplore: 02 September 2020
Publisher: IEEE
Conference Location: Springfield, MA, USA
