Abstract
Embeddings are widely used across natural language processing, e.g., in neural machine translation, text classification, text summarization, and sentiment analysis. Word-based embeddings are faster to train, while character-based embeddings achieve better performance. In this paper, we explore a way to combine the two in order to bridge the gap between word-based and character-based embeddings in both speed and performance. In our experiments and analysis of this Hybrid Embedding, we found it difficult to make the two different embeddings generate the same embedding vector, but we still obtained comparable results. Guided by this analysis, we develop a form of character-based embedding, called Cached Embedding, that achieves almost the same performance while cutting the extra training time of character-based embedding roughly in half.
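As a rough illustration only, the PyTorch sketch below shows one plausible reading of the cached-embedding-with-random-selection idea named in the title: a character-aware word embedding (a character CNN in the spirit of character-aware language models) backed by a per-word cache, where each word occurrence is recomputed from its characters with some probability and otherwise served from the cache. All names, shapes, and the exact refresh rule here are assumptions for illustration, not the authors' specification.

```python
import torch
import torch.nn as nn

class CachedCharEmbedding(nn.Module):
    """Hypothetical sketch of a cached character-aware embedding.
    With probability p_refresh a word's vector is recomputed from its
    characters (and the cache updated); otherwise the cached vector is
    reused, skipping the character-level forward pass."""

    def __init__(self, n_words, n_chars, char_dim=16, word_dim=256,
                 kernel_size=3, p_refresh=0.5):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, word_dim, kernel_size, padding=1)
        # One cached vector per vocabulary word, updated without gradients.
        self.register_buffer("cache", torch.zeros(n_words, word_dim))
        self.register_buffer("cached", torch.zeros(n_words, dtype=torch.bool))
        self.p_refresh = p_refresh

    def compose(self, char_ids):
        # char_ids: (batch, max_word_len) -> word vectors (batch, word_dim)
        x = self.char_emb(char_ids).transpose(1, 2)  # (batch, char_dim, len)
        return torch.relu(self.conv(x)).max(dim=2).values

    def forward(self, word_ids, char_ids):
        # Randomly select which word occurrences to recompute this step.
        refresh = torch.rand(word_ids.shape, device=word_ids.device) < self.p_refresh
        refresh |= ~self.cached[word_ids]  # never reuse an empty cache slot
        out = self.cache[word_ids].clone()
        if refresh.any():
            fresh = self.compose(char_ids[refresh])
            out[refresh] = fresh  # refreshed entries carry gradients
            with torch.no_grad():  # keep the cache itself gradient-free
                self.cache[word_ids[refresh]] = fresh.detach()
                self.cached[word_ids[refresh]] = True
        return out
```

Under these assumptions, reusing cached vectors skips the character-level convolution for a fraction of word occurrences, which is where the training-time savings described in the abstract would come from, while the random refresh keeps cached vectors from going stale as the character-level parameters are updated.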
Acknowledgements
This work was supported by the National Science Foundation of China (Grant Nos. 61772075 and 61772081), the Scientific Research Project of the Beijing Educational Committee (Grant No. KM201711232022), the Beijing Municipal Education Committee (Grant No. SZ20171123228), and the Beijing Institute of Computer Technology and Application (Extensible Knowledge Graph Construction Technique Project).
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Yang, Y., Zhang, H.P., Wu, L., Liu, X., Zhang, Y. (2020). Cached Embedding with Random Selection: Optimization Technique to Improve Training Speed of Character-Aware Embedding. In: Nguyen, N., Jearanaitanakij, K., Selamat, A., Trawiński, B., Chittayasothorn, S. (eds.) Intelligent Information and Database Systems. ACIIDS 2020. Lecture Notes in Computer Science, vol. 12033. Springer, Cham. https://doi.org/10.1007/978-3-030-41964-6_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-41963-9
Online ISBN: 978-3-030-41964-6