Abstract
In this paper, we focus on LSTM (Long Short-Term Memory) networks and their implementation in the popular framework Keras. The goal is to show how to take advantage of their ability to carry context across time steps by holding an internal state, and to clarify what the stateful property of an LSTM recurrent neural network implemented in Keras actually means. The main outcome of the work is a general algorithm for packing arbitrary context-dependent data, capable of 1/ packing the data to fit stateful models; 2/ making the training process efficient by supplying multiple frames together; 3/ supporting on-the-fly (frame-by-frame) prediction by the trained model. Two training methods are presented: a window-based approach is compared with a fully stateful approach. The analysis is performed on the Speech Commands dataset. Finally, we give guidance on how to use stateful LSTMs to build a key-phrase detection system.
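To make the stateful property concrete, the following is a minimal sketch of a stateful LSTM in Keras (tf.keras 2.x API). The layer size, feature dimension, class count, batch size, and chunk length are illustrative assumptions and do not reproduce the models evaluated in the paper.

```python
# Minimal sketch of what "stateful" means in Keras; shapes are illustrative.
import numpy as np
from tensorflow import keras

n_features = 13   # assumed: features per frame (e.g. MFCCs)
n_classes = 12    # assumed: number of key-phrase / filler classes
batch_size = 32   # stateful layers require a fixed batch size
chunk_len = 25    # assumed: frames supplied together per training step

# Training model: with stateful=True the LSTM state is NOT cleared between
# batches, so consecutive batches can carry consecutive chunks of the same
# utterances and the network still sees the full left context.
train_model = keras.Sequential([
    keras.layers.LSTM(64, stateful=True, return_sequences=True,
                      batch_input_shape=(batch_size, chunk_len, n_features)),
    keras.layers.Dense(n_classes, activation="softmax"),
])
train_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Inference model: same weights, but batch size 1 and one frame per call,
# which allows on-the-fly (frame-by-frame) prediction.
infer_model = keras.Sequential([
    keras.layers.LSTM(64, stateful=True, return_sequences=True,
                      batch_input_shape=(1, 1, n_features)),
    keras.layers.Dense(n_classes, activation="softmax"),
])
infer_model.set_weights(train_model.get_weights())

frame = np.zeros((1, 1, n_features), dtype="float32")  # one incoming frame
probs = infer_model.predict(frame, verbose=0)  # state carried over from previous frames
infer_model.reset_states()                     # clear the context when an utterance ends
```

Because the state persists across calls, the inference model can consume an audio stream one frame at a time; an explicit reset_states() is the only point at which the accumulated context is cleared.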
Acknowledgement
This work was supported by The Ministry of Education, Youth and Sports of the Czech Republic project No. LO1506.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Bulín, M., Šmídl, L., Švec, J. (2019). On Using Stateful LSTM Networks for Key-Phrase Detection. In: Ekštein, K. (eds) Text, Speech, and Dialogue. TSD 2019. Lecture Notes in Computer Science, vol 11697. Springer, Cham. https://doi.org/10.1007/978-3-030-27947-9_24
DOI: https://doi.org/10.1007/978-3-030-27947-9_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-27946-2
Online ISBN: 978-3-030-27947-9