Abstract
In this paper, we analyze the performance of a feed-forward neural network (FFNN)-based language model in contrast with an n-gram language model. The probability of the n-gram language model is estimated from the statistics of word sequences. The FFNN-based language model consists of three hidden layers with 500 hidden units per hidden layer and a 30-dimensional word embedding. The FFNN-based language model outperforms the n-gram language model by 1.5 % in terms of word error rate (WER) on the English WSJ domain.
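The architecture summarized above (a 30-dimensional word embedding feeding three hidden layers of 500 units each, predicting the next word from the preceding context) can be illustrated with the minimal sketch below. It is not the authors' implementation: it assumes PyTorch, a context of three preceding words, and a 10,000-word vocabulary, none of which are stated in the abstract, and the class name and default hyper-parameters are hypothetical.

```python
import torch
import torch.nn as nn

class FFNNLanguageModel(nn.Module):
    """Sketch of an FFNN language model with the hyper-parameters from the abstract:
    30-dim word embedding, three hidden layers of 500 units each.
    Vocabulary size and context length are illustrative assumptions."""

    def __init__(self, vocab_size=10000, context_size=3,
                 embed_dim=30, hidden_dim=500):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Sequential(
            nn.Linear(context_size * embed_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
        )
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context_ids):
        # context_ids: (batch, context_size) indices of the preceding words
        emb = self.embedding(context_ids)        # (batch, context_size, embed_dim)
        emb = emb.view(emb.size(0), -1)          # concatenate the context embeddings
        logits = self.output(self.hidden(emb))   # (batch, vocab_size)
        return torch.log_softmax(logits, dim=-1) # log P(next word | context)

# Example: log-probabilities of the next word given three (hypothetical) context word ids
model = FFNNLanguageModel()
log_probs = model(torch.tensor([[12, 7, 431]]))  # shape: (1, vocab_size)
```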
Acknowledgments
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (No. NRF-2014R1A1A1002197).
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this chapter
Kim, KH. et al. (2015). Performance Analysis of FFNN-Based Language Model in Contrast with n-Gram. In: Lee, G., Kim, H., Jeong, M., Kim, JH. (eds) Natural Language Dialog Systems and Intelligent Assistants. Springer, Cham. https://doi.org/10.1007/978-3-319-19291-8_25
DOI: https://doi.org/10.1007/978-3-319-19291-8_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-19290-1
Online ISBN: 978-3-319-19291-8