A Multi-Classification Sentiment Analysis Model of Chinese Short Text Based on Gated Linear Units and Attention Mechanism


Abstract

Sentiment analysis of social media texts has become a research hotspot in information processing. Sentiment analysis methods that combine machine learning with a sentiment lexicon require manual feature selection, and the selected emotional features are often subjective, which easily leads to overfitting and poor generalization. Sentiment analysis models based on deep learning can automatically extract effective emotional features from text, which greatly improves the accuracy of text sentiment analysis. However, the scarcity of multi-class emotional corpora makes it difficult for such models to express emotional polarity accurately. We therefore propose GLU-RCNN, a multi-class sentiment analysis model based on Gated Linear Units and an attention mechanism. Our model uses a GLU-based attention mechanism to integrate the local features extracted by a CNN with the semantic features extracted by an LSTM. Local features of the short text are extracted with multi-size convolution kernels and concatenated. At the classification layer, the features produced by the CNN and the LSTM are concatenated to represent the emotional content of the text. A detailed evaluation on two benchmark datasets shows that the proposed model outperforms state-of-the-art approaches.
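
To make the described architecture concrete, below is a minimal PyTorch sketch of a GLU-gated CNN + LSTM fusion classifier. It is an illustration only: the class name GLURCNNSketch, the layer sizes, kernel sizes, pooling choices, and the exact way the gate combines CNN and LSTM features are assumptions, not details taken from the paper.

```python
# Hypothetical sketch (not the authors' implementation) of a GLU-gated
# CNN + LSTM fusion model for multi-class short-text sentiment analysis.
import torch
import torch.nn as nn

class GLURCNNSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128,
                 kernel_sizes=(2, 3, 4), num_filters=100, num_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Multi-size convolution kernels extract local n-gram features.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k, padding=k // 2) for k in kernel_sizes
        )
        # A bidirectional LSTM captures sequential/semantic features.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        cnn_dim = num_filters * len(kernel_sizes)
        lstm_dim = 2 * hidden_dim
        # GLU-style gating: a sigmoid gate modulates a linear projection.
        # Here the gate is driven by the CNN features (an assumption).
        self.value = nn.Linear(lstm_dim, hidden_dim)
        self.gate = nn.Linear(cnn_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim + cnn_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embedding(token_ids)                  # (B, T, E)
        conv_in = emb.transpose(1, 2)                    # (B, E, T)
        # Max-pooled local features from each kernel size, then concatenated.
        local = [torch.relu(conv(conv_in)).max(dim=2).values for conv in self.convs]
        local = torch.cat(local, dim=1)                  # (B, cnn_dim)
        seq_out, _ = self.lstm(emb)                      # (B, T, 2H)
        semantic = seq_out.mean(dim=1)                   # (B, 2H) averaged semantic features
        # GLU-style fusion: CNN-derived gate weights the LSTM representation.
        gated = self.value(semantic) * torch.sigmoid(self.gate(local))
        features = torch.cat([gated, local], dim=1)      # fuse gated semantic + local features
        return self.classifier(features)                 # class logits
```

In this sketch the CNN branch supplies the gate that modulates the LSTM's semantic representation, loosely mirroring the paper's idea of GLU-based fusion; the gated vector and the pooled local features are then concatenated before the final linear classifier.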



• Published in

  ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 20, Issue 6 (November 2021), 439 pages
  ISSN: 2375-4699
  EISSN: 2375-4702
  DOI: 10.1145/3476127

      Copyright © 2021 Association for Computing Machinery.


      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 20 September 2021
      • Accepted: 1 May 2021
      • Revised: 1 March 2021
      • Received: 1 March 2020
Published in TALLIP Volume 20, Issue 6


      Qualifiers

      • research-article
      • Refereed
