
Semantic Label Representations with Lbl2Vec: A Similarity-Based Approach for Unsupervised Text Classification

  • Conference paper
Web Information Systems and Technologies (WEBIST 2020, WEBIST 2021)

Abstract

In this paper, we evaluate the Lbl2Vec approach for unsupervised text document classification. Lbl2Vec requires only a small number of keywords describing the respective classes to create semantic label representations. For classification, Lbl2Vec uses cosine similarities between label and document representations, but no annotation information. We show that Lbl2Vec significantly outperforms common unsupervised text classification approaches and a widely used zero-shot text classification approach. Furthermore, we show that using more precise keywords can significantly improve the classification results of similarity-based text classification approaches.
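The classification step described above — building a label representation from a handful of descriptive keyword vectors and assigning each document to the label with the highest cosine similarity — can be illustrated with a minimal sketch. All embeddings and class names below are hypothetical toy values, and the label representation is simplified to a plain keyword-vector centroid; the actual Lbl2Vec implementation additionally learns jointly embedded word and document vectors and cleans outlier documents.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(doc_vec, label_vecs):
    # Assign the label whose representation is most similar to the document.
    sims = {label: cosine_similarity(doc_vec, vec) for label, vec in label_vecs.items()}
    return max(sims, key=sims.get)

# Toy 2-D embeddings standing in for learned word vectors (hypothetical values).
keyword_vecs = {
    "sports":   [np.array([0.9, 0.1]), np.array([0.8, 0.2])],
    "politics": [np.array([0.1, 0.9]), np.array([0.2, 0.8])],
}

# Each label representation is derived from its keyword vectors (here: the centroid).
label_vecs = {label: np.mean(vecs, axis=0) for label, vecs in keyword_vecs.items()}

# A document embedding leaning toward the "sports" keywords; no annotations are used.
doc_vec = np.array([0.7, 0.3])
print(classify(doc_vec, label_vecs))  # → sports
```

The sketch also makes the paper's second finding plausible: the more precisely the chosen keywords characterize a class, the tighter the resulting label representation, and the more reliably cosine similarity separates the classes.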



Notes

  1. https://github.com/sebischair/Lbl2Vec.


Author information


Corresponding author

Correspondence to Tim Schopf.



Copyright information

© 2023 Springer Nature Switzerland AG

About this paper


Cite this paper

Schopf, T., Braun, D., Matthes, F. (2023). Semantic Label Representations with Lbl2Vec: A Similarity-Based Approach for Unsupervised Text Classification. In: Marchiori, M., Domínguez Mayo, F.J., Filipe, J. (eds) Web Information Systems and Technologies. WEBIST 2020, WEBIST 2021. Lecture Notes in Business Information Processing, vol 469. Springer, Cham. https://doi.org/10.1007/978-3-031-24197-0_4


  • DOI: https://doi.org/10.1007/978-3-031-24197-0_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-24196-3

  • Online ISBN: 978-3-031-24197-0

  • eBook Packages: Computer Science, Computer Science (R0)
