
DeepTempo: A Hardware-Friendly Direct Feedback Alignment Multi-Layer Tempotron Learning Rule for Deep Spiking Neural Networks


Abstract:

Layer-by-layer error back-propagation (BP) in deep spiking neural networks (SNNs) involves complex operations and high latency. To overcome these problems, we propose a method to train deep SNNs efficiently and rapidly by extending the well-known single-layer Tempotron learning rule to multiple SNN layers under the Direct Feedback Alignment (DFA) framework, which directly projects output errors onto each hidden layer via a fixed random feedback matrix. A trace-based optimization for Tempotron learning is also proposed. With these two techniques, the learning process becomes spatiotemporally local and well suited to neuromorphic hardware implementation. We applied the proposed hardware-friendly method to train multi-layer and deep SNNs and obtained competitive recognition accuracies on the MNIST and ETH-80 datasets.
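
To make the mechanism described in the abstract concrete, the snippet below sketches a DFA-style update for a two-layer spiking network with Tempotron-like weight changes driven by synaptic traces: the output error is projected onto the hidden layer through a fixed random matrix instead of the transposed forward weights. This is a minimal illustrative sketch, not the paper's implementation; the network sizes, time constants, double-exponential kernel, thresholds, and the `psp_traces`/`train_step` helpers are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and constants (assumed, not from the paper)
n_in, n_hid, n_out = 100, 50, 10
T, dt = 100, 1.0                      # simulation steps and time step (ms)
tau_m, tau_s = 15.0, 3.75             # membrane and synaptic time constants
lr = 1e-3                             # learning rate

# Forward weights and a fixed random feedback matrix (DFA)
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B  = rng.normal(0, 0.1, (n_hid, n_out))   # fixed, never trained

def psp_traces(spikes):
    """Double-exponential postsynaptic traces (Tempotron-style kernel)."""
    t1 = np.zeros(spikes.shape[1])
    t2 = np.zeros(spikes.shape[1])
    out = np.zeros_like(spikes, dtype=float)
    for t in range(spikes.shape[0]):
        t1 = t1 * np.exp(-dt / tau_m) + spikes[t]
        t2 = t2 * np.exp(-dt / tau_s) + spikes[t]
        out[t] = t1 - t2
    return out

def train_step(in_spikes, target):
    """One DFA-style update: the output error is projected onto the hidden
    layer through the fixed matrix B instead of W2.T (as BP would do)."""
    x = psp_traces(in_spikes)                 # (T, n_in) input traces
    v_hid = x @ W1.T                          # hidden potentials over time
    h_spk = (v_hid > 1.0).astype(float)       # hidden threshold crossings
    h = psp_traces(h_spk)                     # (T, n_hid) hidden traces
    v_out = h @ W2.T                          # output potentials over time

    # Tempotron-style decision: did each output neuron cross threshold?
    fired = (v_out.max(axis=0) > 1.0).astype(float)
    err = target - fired                      # (n_out,) output error

    # Updates use the presynaptic trace at each neuron's peak-potential time
    t_out = v_out.argmax(axis=0)              # peak time per output neuron
    W2 += lr * err[:, None] * h[t_out]        # (n_out, n_hid)

    hid_err = B @ err                         # DFA: project error via fixed B
    t_hid = v_hid.argmax(axis=0)              # peak time per hidden neuron
    W1 += lr * hid_err[:, None] * x[t_hid]    # (n_hid, n_in)
    return fired

# Example usage with random Poisson-like input spikes and a one-hot target
spk = (rng.random((T, n_in)) < 0.05).astype(float)
tgt = np.zeros(n_out); tgt[3] = 1.0
print(train_step(spk, tgt))
```

Because the hidden-layer error signal depends only on the output error, the fixed matrix B, and locally available traces, each layer can be updated without waiting for a layer-by-layer backward pass, which is the spatiotemporal locality the abstract refers to.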
Page(s): 1581 - 1585
Date of Publication: 04 March 2021



