
Structure optimizations of neuromorphic computing architectures for deep neural network


Abstract:

This work addresses a new structure optimization of neuromorphic computing architectures that enables DNN (deep neural network) computation to run, theoretically, twice as fast as on existing architectures. Specifically, we propose a new structural technique that mixes dendritic-based and axonal-based neuromorphic cores so as to completely eliminate the inherent non-zero waiting time between cores in the DNN implementation. In addition, in conjunction with the new architecture, we propose a technique for maximally utilizing the computation units so that the resource overhead of the total computation units is minimized. We provide a set of experimental data demonstrating the effectiveness (i.e., speed and area) of our proposed architectural optimizations: ~2× speedup with no accuracy penalty on the neuromorphic computation, or improved accuracy with no additional computation time.
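The abstract does not spell out how mixing the two core types removes the inter-core waiting time, so the following is only a minimal toy timing model, not the paper's implementation. It assumes that in a conventional schedule each core must wait for the previous core's complete output, whereas in the mixed dendritic/axonal arrangement a consumer core can start working on partial results streamed from its producer, overlapping adjacent layers and yielding roughly the claimed ~2× speedup. All function names and timing numbers here are illustrative assumptions.

```python
# Toy timing model of the claimed speedup (illustrative assumptions only).

def sequential_time(layer_times):
    """Conventional schedule: each core waits until the previous core
    has produced its entire output, so layer times simply add up."""
    return sum(layer_times)

def mixed_time(layer_times, startup=1):
    """Assumed mixed dendritic/axonal schedule: adjacent cores overlap,
    so each pair of layers costs roughly one layer time plus a small
    startup delay before the first partial results arrive."""
    total = 0
    for i in range(0, len(layer_times), 2):
        pair = layer_times[i:i + 2]
        total += max(pair) + startup
    return total

if __name__ == "__main__":
    layers = [100] * 8  # eight layers, 100 arbitrary time units each
    print("sequential:", sequential_time(layers))  # 800
    print("mixed     :", mixed_time(layers))       # 404, roughly 2x faster
```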
Date of Conference: 19-23 March 2018
Date Added to IEEE Xplore: 23 April 2018
Electronic ISSN: 1558-1101
Conference Location: Dresden, Germany

