
EK-BERT: An Enhanced K-BERT Model for Chinese Sentiment Analysis

  • Conference paper
  • In: Knowledge Graph and Semantic Computing: Knowledge Graph Empowers the Digital Economy (CCKS 2022)
  • Part of the book series: Communications in Computer and Information Science (CCIS, volume 1669)

Abstract

Pre-trained language models (PLMs) such as BERT have achieved strong results on many natural language processing (NLP) tasks. Recently, several studies have attempted to integrate factual knowledge into PLMs to adapt them to various downstream tasks. For sentiment analysis, sentiment knowledge, such as sentiment words, largely determines the sentiment tendency of a text. In Chinese sentiment analysis, historical stories and fables give words richer connotations and more complex emotions, which makes sentiment knowledge injection even more necessary; however, this knowledge has not yet been fully exploited. In this paper, we propose EK-BERT, an Enhanced K-BERT model for Chinese sentiment analysis, which builds on the K-BERT model and utilizes a sentiment knowledge graph to achieve better results on the sentiment analysis task. To construct a high-quality sentiment knowledge graph, we collect a large number of sentiment words by combining several existing sentiment dictionaries. Moreover, to understand texts better, we enhance local attention through syntactic analysis so that EK-BERT pays more attention to syntactically relevant words. EK-BERT is compatible with BERT and with existing structured knowledge. Experimental results show that our proposed EK-BERT achieves better performance on the Chinese sentiment analysis task.
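The syntax-enhanced local attention described above can be illustrated with a minimal sketch. This is our own illustrative assumption of the general idea, not the paper's exact formulation: given a dependency parse, each token is allowed to attend to itself, its syntactic head, and its direct children, which a Transformer can consume as a binary attention mask. The function name and the choice of head/child neighborhood are ours.

```python
import numpy as np

def syntax_local_attention_mask(heads):
    """Build a binary attention mask from a dependency parse.

    heads[i] is the index of token i's syntactic head (-1 for the root).
    Token i may attend to itself, its head, and its direct children,
    so attention concentrates on syntactically relevant words.
    """
    n = len(heads)
    mask = np.eye(n, dtype=int)  # every token attends to itself
    for i, h in enumerate(heads):
        if h >= 0:
            mask[i, h] = 1  # child attends to its syntactic head
            mask[h, i] = 1  # head attends back to its child
    return mask

# Hypothetical parse of "I love this movie":
# I -> love (root), this -> movie, movie -> love
mask = syntax_local_attention_mask([1, -1, 3, 1])
```

In a full model, positions where the mask is 0 would have their attention logits set to a large negative value before the softmax, so only syntactic neighbors contribute.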


Notes

  1. https://github.com/DUTIR-Emotion-Group/CCL2018-Chinese-Metaphor-Analysis
  2. https://blog.csdn.net/u013733326/article/details/105621880



Acknowledgement

We thank the anonymous reviewers. This work is supported by the Natural Science Foundation of China (grants 62172086, 61872074, 62272092).

Author information

Corresponding author: Daling Wang.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Bai, H., Wang, D., Feng, S., Zhang, Y. (2022). EK-BERT: An Enhanced K-BERT Model for Chinese Sentiment Analysis. In: Sun, M., et al. Knowledge Graph and Semantic Computing: Knowledge Graph Empowers the Digital Economy. CCKS 2022. Communications in Computer and Information Science, vol 1669. Springer, Singapore. https://doi.org/10.1007/978-981-19-7596-7_11


  • DOI: https://doi.org/10.1007/978-981-19-7596-7_11


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-7595-0

  • Online ISBN: 978-981-19-7596-7

