Abstract
The continuous growth in the collection and dissemination of textual data in electronic media has made text classification a crucial task for exploiting, in a variety of applications, the massive amounts of digital text available nowadays. Knowledge Graphs (KGs) and their embeddings can provide additional semantics to improve text classification. However, most proposals in the literature rely solely on the words found in the texts to classify them. A few text classification approaches employ knowledge embeddings alongside word embeddings, but these embeddings are produced separately and are not integrated into the same vector space. Unlike previous proposals, this work applies an existing solution for generating text and knowledge embeddings in an integrated way to feed neural classifiers. Experiments using these joint embeddings with 50 dimensions yielded results comparable to those of state-of-the-art approaches on the AG News dataset and slightly superior results on the BBC News dataset.
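The core idea of the approach can be illustrated with a minimal sketch: words and KG entities share a single 50-dimensional vector space, a document is represented by averaging the joint embeddings of its tokens and linked entities, and that vector feeds a neural classifier. Everything below is an assumption for illustration only: the vocabulary, the randomly initialized embedding matrix (standing in for trained joint embeddings), and the untrained linear softmax head are hypothetical, not the paper's actual model or data.

```python
import numpy as np

# Hypothetical joint vocabulary: surface words and KG entities
# (e.g. DBpedia resources) indexed in the SAME embedding table.
rng = np.random.default_rng(0)
DIM, NUM_CLASSES = 50, 4  # 50-d joint embeddings, 4 news categories
vocab = {"economy": 0, "match": 1, "dbpedia:Football": 2, "dbpedia:Bank": 3}
embeddings = rng.normal(size=(len(vocab), DIM))  # stand-in for trained joint embeddings


def embed_document(tokens):
    """Average the joint embeddings of a document's words and linked entities."""
    vecs = [embeddings[vocab[t]] for t in tokens if t in vocab]
    return np.mean(vecs, axis=0)


# A minimal classifier head over the document vector (untrained here;
# in practice it would be fit on labeled documents).
W = rng.normal(size=(DIM, NUM_CLASSES)) * 0.01
b = np.zeros(NUM_CLASSES)


def classify(tokens):
    """Return softmax class probabilities for a tokenized document."""
    logits = embed_document(tokens) @ W + b
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()


probs = classify(["match", "dbpedia:Football"])
```

Because the word and entity vectors live in one space, the same averaging step fuses lexical and KG semantics without the separate projection step that two-space approaches require.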
Acknowledgements
This study was supported by the Foundation for Research Support and Innovation of the State of Santa Catarina (Fundação de Amparo à Pesquisa e Inovação do Estado de Santa Catarina, FAPESC), the Print CAPES-UFSC Automation 4.0 Project, and the Brazilian National Laboratory for Scientific Computing (LNCC).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
da Costa, L.S., Oliveira, I.L., Fileto, R. (2022). A Neural Network Approach for Text Classification Using Low Dimensional Joint Embeddings of Words and Knowledge. In: Pardede, E., Delir Haghighi, P., Khalil, I., Kotsis, G. (eds) Information Integration and Web Intelligence. iiWAS 2022. Lecture Notes in Computer Science, vol 13635. Springer, Cham. https://doi.org/10.1007/978-3-031-21047-1_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-21046-4
Online ISBN: 978-3-031-21047-1