
Hardware Compilation of Deep Neural Networks: An Overview



Abstract:

Deploying a deep neural network model on a reconfigurable platform, such as an FPGA, is challenging due to the enormous design spaces of both the network model and the hardware design. A neural network model has various layer types, connection patterns and data representations, and the corresponding implementation can be customised with different architectural and modular parameters. Rather than manually exploring this design space, it is more effective to automate optimisation throughout an end-to-end compilation process. This paper provides an overview of recent literature proposing novel approaches to this end. We organise the material to mirror a typical compilation flow: front end, platform-independent optimisation and back end. Design templates for neural network accelerators are studied with a specific focus on their derivation methodologies. We also review previous work on network compilation and optimisation for other hardware platforms to gain inspiration for FPGA implementation. Finally, we propose some future directions for related research.
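The three-stage flow the abstract names (front end, platform-independent optimisation, back end) can be sketched in miniature. The sketch below is purely illustrative: the intermediate representation, the layer names, and the fusion pass are hypothetical assumptions, not the API of any tool surveyed in the paper.

```python
# Illustrative sketch of an end-to-end DNN-to-hardware compilation flow,
# mirroring the three stages named in the abstract. All names here are
# hypothetical; the surveyed tools each define their own IRs and passes.

from dataclasses import dataclass, field


@dataclass
class Graph:
    """A toy intermediate representation: an ordered list of layer names."""
    layers: list = field(default_factory=list)


def front_end(model_description: list) -> Graph:
    # Parse a framework-level model description into the compiler's IR.
    return Graph(layers=list(model_description))


def optimise(graph: Graph) -> Graph:
    # A platform-independent pass: fuse a conv layer with a following
    # ReLU so the back end can emit a single hardware module for both.
    fused, i = [], 0
    while i < len(graph.layers):
        if (i + 1 < len(graph.layers)
                and graph.layers[i] == "conv"
                and graph.layers[i + 1] == "relu"):
            fused.append("conv_relu")
            i += 2
        else:
            fused.append(graph.layers[i])
            i += 1
    return Graph(layers=fused)


def back_end(graph: Graph) -> str:
    # Map each IR layer onto a (hypothetical) hardware template instance.
    return "\n".join(f"instantiate {layer}_unit" for layer in graph.layers)


rtl = back_end(optimise(front_end(["conv", "relu", "pool"])))
print(rtl)
```

Here the fusion pass reduces three logical layers to two hardware units; a real flow would additionally explore architectural parameters (parallelism, data representation) rather than a fixed one-module-per-layer mapping.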
Date of Conference: 10-12 July 2018
Date Added to IEEE Xplore: 26 August 2018
Electronic ISSN: 2160-052X
Conference Location: Milan, Italy
