
Architecture Analysis for Symmetric Simplicial Deep Neural Networks on Chip


Abstract:

Convolutional Neural Networks (CNN) are the dominating Machine Learning (ML) architecture used for complex tasks such as image classification, despite requiring heavy computational resources, large storage space and power-demanding hardware. This motivates the exploration of alternative implementations using efficient neuromorphic hardware for resource-constrained applications. Conventional Simplicial Piece-Wise Linear implementations allow the development of efficient hardware to run DNNs by avoiding multipliers, but demand large amounts of memory. Symmetric Simplicial (SymSim) functions preserve the efficiency of the implementation while reducing the number of parameters per layer, and can be trained to replace convolutional layers and natively run non-linear filters such as MaxPool. This paper analyzes architectures for implementing a Neural Network accelerator for SymSim operations, optimizing the number of parallel cores to reduce computation time. For this, we develop a model that takes into account the core processing times as well as the data transfer times.
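The abstract describes a model that balances core processing times against data transfer times when choosing the number of parallel cores. The sketch below is a hypothetical illustration of such a trade-off, not the paper's actual model: all names and values (layer_latency, ops_per_layer, bytes_per_layer, t_op, bandwidth) are assumptions, and it simply assumes compute time scales inversely with the core count while transfer time does not.

```python
# Hypothetical latency-model sketch for sizing the number of parallel cores.
# Not the paper's model; every parameter name and value here is illustrative.

def layer_latency(ops_per_layer: int, bytes_per_layer: int,
                  num_cores: int, t_op: float, bandwidth: float) -> float:
    """Estimate per-layer latency (seconds) under simple assumptions.

    compute_time : time for the cores to process the layer's operations,
                   assumed to shrink linearly with the number of cores.
    transfer_time: time to move the layer's parameters/activations,
                   assumed independent of the core count.
    """
    compute_time = (ops_per_layer * t_op) / num_cores
    transfer_time = bytes_per_layer / bandwidth
    # If transfers overlap with computation, the slower of the two dominates;
    # past that point, adding cores no longer reduces latency.
    return max(compute_time, transfer_time)


if __name__ == "__main__":
    # Sweep core counts to see where data transfer becomes the bottleneck.
    for cores in (1, 2, 4, 8, 16, 32):
        t = layer_latency(ops_per_layer=1_000_000, bytes_per_layer=200_000,
                          num_cores=cores, t_op=1e-9, bandwidth=1e9)
        print(f"{cores:2d} cores -> {t * 1e6:8.2f} us")
```

Under these assumptions the sweep shows diminishing returns once compute time drops below transfer time, which is the kind of break-even point an architecture analysis of core count would look for.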
Date of Conference: 22-24 March 2023
Date Added to IEEE Xplore: 10 April 2023
Conference Location: Baltimore, MD, USA
