Research article · DOI: 10.1145/3639233.3639347

Named Entities Based on the BERT-BILSTM-ACRF Model Recognition Research

Published: 05 March 2024

ABSTRACT

As a crucial first step in constructing a knowledge graph, named entity recognition largely determines the quality of the final graph. However, current Chinese named entity recognition methods still suffer from problems such as long training times and low accuracy. We therefore propose a BERT-BILSTM-ACRF entity recognition method that incorporates a self-attention mechanism. First, the BERT model is used as the embedding layer to vectorize the text, and character position information is captured by a bidirectional Long Short-Term Memory (BiLSTM) network. Next, a self-attention mechanism further explores the internal relationships within the character sequence, and finally a conditional random field (CRF) model decodes the optimal tag sequence. To verify the effectiveness of the BERT-BILSTM-ACRF model, we applied it to a dataset built from the university course textbook "Data Structure"; the model achieves an F1 score of 98.97% and an accuracy of 98.14%, demonstrating good experimental results and practical value.
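The paper's implementation is not included here; as a minimal illustration of the final step described above, the sketch below implements Viterbi decoding, the algorithm a linear-chain CRF layer uses to recover the optimal tag sequence from per-token emission scores (such as those produced by the BiLSTM and self-attention layers). All tag names, scores, and transition weights below are invented for the example and are not taken from the paper.

```python
def viterbi_decode(emissions, transitions, tags):
    """Return the highest-scoring tag sequence for one sentence.

    emissions:   list of {tag: score} dicts, one per token (illustrative
                 stand-in for BiLSTM/attention outputs).
    transitions: {(prev_tag, tag): score} pairs; missing pairs score 0.0.
    tags:        list of all tag names.
    """
    # Initialise with the first token's emission scores.
    score = {t: emissions[0][t] for t in tags}
    back = []  # one backpointer dict per subsequent token
    for emit in emissions[1:]:
        new_score, pointers = {}, {}
        for t in tags:
            # Best previous tag when the current tag is t.
            prev = max(tags, key=lambda p: score[p] + transitions.get((p, t), 0.0))
            new_score[t] = score[prev] + transitions.get((prev, t), 0.0) + emit[t]
            pointers[t] = prev
        score = new_score
        back.append(pointers)
    # Backtrack from the best final tag.
    best = max(tags, key=lambda t: score[t])
    path = [best]
    for pointers in reversed(back):
        path.append(pointers[path[-1]])
    return list(reversed(path))
```

In a full BERT-BiLSTM-attention-CRF model the transition scores are learned jointly with the network; the decoding step itself is exactly this dynamic program.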


Published in
NLPIR '23: Proceedings of the 2023 7th International Conference on Natural Language Processing and Information Retrieval
December 2023, 336 pages
ISBN: 9798400709227
DOI: 10.1145/3639233
Copyright © 2023 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
