Abstract
Natural Language Processing (NLP) problems are among the hardest Machine Learning (ML) problems due to the complex nature of human language. The introduction of word embeddings improved the performance of ML models on various NLP tasks such as text classification, sentiment analysis, and machine translation. Word embeddings are real-valued vector representations of words in a specific vector space. Producing quality word embeddings for use as input to downstream NLP tasks is important for obtaining good performance, and accomplishing this requires corpora of sufficient size. Corpora may be formed in a multitude of ways, including from text that was originally electronic, transcripts of spoken language, optical character recognition, and text produced synthetically from an available dataset. This study provides the most recent bibliometric analysis on the topic of corpora generation for learning word vector embeddings. The analysis is based on publication data from 2006 to 2022 retrieved from the Scopus scientific database. A descriptive analysis method has been employed to obtain statistical characteristics of the publications in the research area. The systematic analysis results show the field's evolution over time and highlight influential contributions. Compiled bibliometric reviews can help researchers grasp the general state of scientific knowledge in a field, together with its descriptive features, patterns, and insights, and thereby design their studies systematically.
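The notion of a word embedding mentioned above can be illustrated with a minimal sketch: one classical way to obtain real-valued word vectors is to count co-occurrences within a context window over a corpus, so that words appearing in similar contexts receive similar vectors. The toy corpus, window size, and helper functions below are illustrative assumptions, not the methods or data surveyed in the paper.

```python
from collections import defaultdict
from math import sqrt

# A toy corpus; real studies require corpora of far greater size.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count symmetric co-occurrences within a +/-2 word window.
window = 2
cooc = defaultdict(lambda: defaultdict(int))
vocab = set()
for sentence in corpus:
    tokens = sentence.split()
    vocab.update(tokens)
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                cooc[word][tokens[j]] += 1

vocab = sorted(vocab)

def vector(word):
    """A word's embedding: its row of co-occurrence counts over the vocabulary."""
    return [cooc[word][v] for v in vocab]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# "cat" and "dog" occur in similar contexts, so their vectors are close,
# while "cat" and "mat" share fewer contexts.
print(cosine(vector("cat"), vector("dog")) > cosine(vector("cat"), vector("mat")))
```

Modern approaches such as word2vec or fastText learn dense vectors by prediction rather than raw counting, but the underlying distributional idea is the same.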
Acknowledgments
This research is conducted within the Committee of Science of the Ministry of Education and Science of the Republic of Kazakhstan under the grant number AP09260670 “Development of methods and algorithms for augmentation of input data for modifying vector embeddings of words.”
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Sagingaliyev, B., Aitakhunova, Z., Shaimerdenova, A., Akhmetov, I., Pak, A., Jaxylykova, A. (2022). A Bibliometric Review of Methods and Algorithms for Generating Corpora for Learning Vector Word Embeddings. In: Pichardo Lagunas, O., Martínez-Miranda, J., Martínez Seis, B. (eds) Advances in Computational Intelligence. MICAI 2022. Lecture Notes in Computer Science, vol 13613. Springer, Cham. https://doi.org/10.1007/978-3-031-19496-2_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-19495-5
Online ISBN: 978-3-031-19496-2
eBook Packages: Computer Science (R0)