Abstract
In this paper we propose an accelerator for the implementation of a Long Short-Term Memory (LSTM) layer in recurrent neural networks. We analyze the effect of quantization on the accuracy of the network, and we derive an architecture that improves the throughput and latency of the accelerator. The proposed technique requires only a single training pass, thus reducing design time. We present implementation results for the proposed accelerator; its performance compares favorably with other solutions reported in the literature.
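The abstract's central idea, analyzing how weight quantization affects LSTM accuracy, can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the authors' design: it builds a single NumPy LSTM step with random weights (dimensions `n_in`, `n_hid` and the 8-fractional-bit format are arbitrary choices for illustration), quantizes the weights to fixed point, and measures the deviation of the hidden-state output from the floating-point reference.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round values to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; W, U, b stack the four gates (input, forget, cell, output)."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = 1.0 / (1.0 + np.exp(-z[0:n]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[n:2*n]))      # forget gate
    g = np.tanh(z[2*n:3*n])                  # candidate cell state
    o = 1.0 / (1.0 + np.exp(-z[3*n:4*n]))    # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy dimensions and random weights (illustrative only).
rng = np.random.default_rng(0)
n_in, n_hid = 8, 16
W = rng.normal(0.0, 0.5, (4 * n_hid, n_in))
U = rng.normal(0.0, 0.5, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

x = rng.normal(size=n_in)
h = np.zeros(n_hid)
c = np.zeros(n_hid)

# Floating-point reference vs. 8-fractional-bit quantized weights.
h_ref, _ = lstm_step(x, h, c, W, U, b)
h_q, _ = lstm_step(x, h, c, quantize(W, 8), quantize(U, 8), quantize(b, 8))
err = np.max(np.abs(h_ref - h_q))
print(f"max hidden-state deviation with 8 fractional bits: {err:.5f}")
```

Sweeping `frac_bits` in a loop like this is one way to pick the narrowest fixed-point format whose accuracy loss is acceptable before committing it to hardware.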
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Abdolzadeh, V., Petra, N. (2019). Efficient Implementation of Recurrent Neural Network Accelerators. In: Saponara, S., De Gloria, A. (eds) Applications in Electronics Pervading Industry, Environment and Society. ApplePies 2018. Lecture Notes in Electrical Engineering, vol 573. Springer, Cham. https://doi.org/10.1007/978-3-030-11973-7_44
Print ISBN: 978-3-030-11972-0
Online ISBN: 978-3-030-11973-7