Abstract:
Current machine learning workloads are constrained by their large power and energy requirements. To address these issues, recent years have witnessed increasing interest in exploring static sparsity (in synaptic memory storage) and dynamic sparsity (in neural activations, via spikes) in neural networks to reduce the required computational resources and enable low-power, event-driven network operation. In parallel, there have been efforts to realize in-memory computing circuit primitives using emerging device technologies to alleviate the memory-bandwidth limitations of CMOS-based neuromorphic computing platforms. In this paper, we discuss these two parallel research thrusts and explore how synergistic hardware-algorithm co-design in neuromorphic computing across the stack, from devices and circuits to architectural frameworks, can yield orders-of-magnitude efficiency improvements over state-of-the-art CMOS implementations.
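To make the two forms of sparsity named in the abstract concrete, here is a minimal NumPy sketch (not from the paper; the layer sizes and thresholds are arbitrary illustrative choices): static sparsity as pruned synaptic weights, and dynamic sparsity as binary spike events that gate an event-driven accumulation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))  # toy synaptic weight matrix

# Static sparsity: prune small weights so they need no storage or compute.
W_pruned = np.where(np.abs(W) > 1.0, W, 0.0)

# Dynamic sparsity: a spiking (binary, event-driven) activation.
membrane = rng.standard_normal(128)
spikes = (membrane > 0.5).astype(np.float64)  # 1 = spike event, 0 = silent

# Event-driven evaluation: accumulate weight columns only for neurons that
# spiked, so cost scales with spike events rather than layer width.
active = np.flatnonzero(spikes)
out = W_pruned[:, active].sum(axis=1)

dense_ops = W.size
event_ops = np.count_nonzero(W_pruned[:, active])
print(f"dense MACs: {dense_ops}, event-driven accumulates: {event_ops}")
```

The printed operation counts illustrate the abstract's efficiency argument: combining both sparsity types shrinks the work from one multiply-accumulate per weight to one accumulate per surviving weight of each spiking neuron.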
Date of Conference: 21-24 October 2018
Date Added to IEEE Xplore: 03 January 2019