
A Relational Classification Network Integrating Multi-scale Semantic Features

  • Conference paper
Natural Language Processing and Chinese Computing (NLPCC 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14303)


Abstract

Relation classification is a fundamental task in natural language processing, supplying high-quality structured corpora for applications such as machine translation, structured data generation, knowledge graphs, and semantic question answering. Existing relation classification models fall into three broad families: models based on traditional machine learning, models based on deep learning, and models based on attention mechanisms. These models share a common weakness: they rely on a single type of feature for classification and do not effectively combine entity features with context features. To address this problem, this study proposes a relation classification network that integrates multi-scale semantic features. By combining entity features with contextual semantic features, the model learns the relational features between entities more effectively. To verify the validity of the model and the effectiveness of each module, experiments were conducted on the SemEval-2010 Task 8 and KBP37 datasets. The results show that the model outperforms most existing models.
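The fusion idea described in the abstract, combining entity-level features with sentence-level context features before classification, can be illustrated with a minimal PyTorch sketch. The module below is a hypothetical reconstruction from the abstract alone: the class name EntityContextFusion, the use of the first token as a pooled context vector, mean-pooled entity spans, and concatenation as the fusion step are all assumptions, not the paper's actual architecture.

```python
# A minimal sketch (not the paper's architecture): fuse a sentence-level
# context vector with two mean-pooled entity-span vectors, then classify
# the relation from the concatenated features.
import torch
import torch.nn as nn


class EntityContextFusion(nn.Module):
    """Hypothetical fusion of context and entity features for relation classification."""

    def __init__(self, hidden_size: int, num_relations: int):
        super().__init__()
        # Separate projections for each feature source, then a joint classifier.
        self.context_proj = nn.Linear(hidden_size, hidden_size)
        self.entity_proj = nn.Linear(hidden_size, hidden_size)
        self.classifier = nn.Linear(3 * hidden_size, num_relations)

    def forward(self, hidden_states, e1_mask, e2_mask):
        # hidden_states: (batch, seq_len, hidden) from any pretrained encoder.
        # e1_mask / e2_mask: (batch, seq_len) binary masks over entity tokens.
        context = self.context_proj(hidden_states[:, 0])  # first token as pooled context
        e1 = self.entity_proj(self._span_mean(hidden_states, e1_mask))
        e2 = self.entity_proj(self._span_mean(hidden_states, e2_mask))
        fused = torch.cat([context, e1, e2], dim=-1)  # fusion by concatenation
        return self.classifier(torch.tanh(fused))

    @staticmethod
    def _span_mean(hidden_states, mask):
        # Average the token vectors inside an entity span.
        mask = mask.unsqueeze(-1).float()
        return (hidden_states * mask).sum(1) / mask.sum(1).clamp(min=1)


# Toy usage with random encoder outputs (batch=2, seq_len=8, hidden=16).
model = EntityContextFusion(hidden_size=16, num_relations=10)
h = torch.randn(2, 8, 16)
e1 = torch.zeros(2, 8)
e1[:, 1:3] = 1  # tokens 1-2 are entity 1
e2 = torch.zeros(2, 8)
e2[:, 5:7] = 1  # tokens 5-6 are entity 2
logits = model(h, e1, e2)  # (2, 10) relation scores
```

In practice the hidden states would come from a pretrained encoder such as BERT, with the entity masks derived from marker positions in the tokenized input; the abstract does not specify which encoder or fusion operator the authors use.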



Acknowledgements

This work was supported by the National Key Research and Development Program of China (2022YFB4004401) and the Taishan Scholars Program (No. tsqn202103097).

Author information


Corresponding author

Correspondence to Min Li.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Li, G., Tian, J., Zhou, M., Li, M., Han, D. (2023). A Relational Classification Network Integrating Multi-scale Semantic Features. In: Liu, F., Duan, N., Xu, Q., Hong, Y. (eds) Natural Language Processing and Chinese Computing. NLPCC 2023. Lecture Notes in Computer Science, vol. 14303. Springer, Cham. https://doi.org/10.1007/978-3-031-44696-2_11


  • DOI: https://doi.org/10.1007/978-3-031-44696-2_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44695-5

  • Online ISBN: 978-3-031-44696-2

  • eBook Packages: Computer Science (R0)
