Abstract:
Spiking neural networks (SNNs) are potentially an efficient way to reduce the computation load as well as the power consumption on edge devices because of their sparsely activated neurons and event-driven behavior. In this paper, a continuous-valued, fully connected artificial neural network (ANN) is equivalently converted into spiking operations, and the parameters are quantized to low resolution. With the proposed method, data bandwidth is reduced and the algorithm is shown to be more hardware-amenable on FPGAs. From the simulation results, the ANN with 8- and 4-bit weights incurred accuracy drops of 0.3% and 0.6%, respectively. The conversion of the quantized ANN to an SNN introduced an acceptable additional accuracy loss within 0.15%.
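The abstract does not specify the quantization scheme; a common choice for low-resolution weights is uniform symmetric quantization per tensor. The following is a minimal sketch of that approach for 8- and 4-bit weights, assuming a single scale factor per weight matrix (the function name and details are illustrative, not the paper's implementation):

```python
import numpy as np

def quantize_weights(w, bits):
    """Uniform symmetric quantization of a weight array to `bits` resolution.
    Illustrative sketch only; the paper's exact scheme is not given in the
    abstract. Returns the de-quantized values for simulating accuracy."""
    qmax = 2 ** (bits - 1) - 1            # 127 for 8-bit, 7 for 4-bit
    scale = np.max(np.abs(w)) / qmax      # one scale per tensor (assumption)
    q = np.clip(np.round(w / scale), -qmax, qmax)  # integer levels
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)  # toy weight matrix
w8 = quantize_weights(w, 8)
w4 = quantize_weights(w, 4)
```

With this scheme the worst-case rounding error per weight is half a quantization step, which is why 4-bit weights cost noticeably more accuracy than 8-bit ones.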
Date of Conference: 20-22 May 2019
Date Added to IEEE Xplore: 13 February 2020