Abstract
Deep learning has had a large impact on many fields, driven chiefly by big data and hardware advances in GPUs and TPUs. Recent progress in text analysis and classification with deep learning has likewise been significant, and much of this improvement stems from better word representations produced by methods such as Word2Vec, FastText, and GloVe. In this study, we aimed to improve Word2Vec word representations, also called embeddings, by tuning their hyperparameters: the minimum word count, vector size, window size, and number of iterations. We introduce two approaches for setting these hyperparameters that are faster than grid search and random search. The embeddings were trained on documents containing approximately 300 million words, and their quality was evaluated with a deep learning classification model on documents spanning 10 classes. Improving the hyperparameters alone yielded a 9% increase in classification accuracy.
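The abstract contrasts the proposed tuning approaches with grid search and random search over the four Word2Vec hyperparameters. As a rough illustration of those two baselines only (the paper's own faster approaches are not described in the abstract and are not reproduced here), the sketch below searches a small grid of the four named hyperparameters; the `score` function is a toy stand-in for training Word2Vec and measuring downstream classification accuracy, not the paper's evaluation:

```python
import itertools
import random

# The four Word2Vec hyperparameters tuned in the paper,
# with illustrative candidate values (not the paper's grid).
grid = {
    "min_count": [1, 5, 10],
    "vector_size": [100, 200, 300],
    "window": [5, 10, 15],
    "epochs": [5, 10, 20],
}

def score(params):
    # Stand-in for: train Word2Vec with these hyperparameters,
    # then evaluate a downstream classifier. Deterministic toy
    # objective so the example is self-contained.
    return (params["vector_size"] / 300 + params["window"] / 15
            + params["epochs"] / 20 - params["min_count"] / 10)

# Grid search: evaluate every combination (3^4 = 81 here).
grid_best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=score,
)

# Random search: evaluate a fixed budget of sampled configurations.
random.seed(0)
rand_best = max(
    ({k: random.choice(v) for k, v in grid.items()} for _ in range(10)),
    key=score,
)

print(grid_best)
```

Grid search is exhaustive but its cost grows multiplicatively with each hyperparameter, which is why the paper's faster alternatives matter when every evaluation means training embeddings on a 300-million-word corpus.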
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Yildiz, B., Tezgider, M. (2020). Learning Quality Improved Word Embedding with Assessment of Hyperparameters. In: Schwardmann, U., et al. (eds.) Euro-Par 2019: Parallel Processing Workshops. Lecture Notes in Computer Science, vol. 11997. Springer, Cham. https://doi.org/10.1007/978-3-030-48340-1_39
DOI: https://doi.org/10.1007/978-3-030-48340-1_39
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-48339-5
Online ISBN: 978-3-030-48340-1