
Event Graph Neural Network for Opinion Target Classification of Microblog Comments

Published: 2 November 2021

Abstract

Opinion target classification of microblog comments is one of the most important tasks in public opinion analysis of an event. Because of the high cost of manual labeling, opinion target classification is generally treated as a weakly supervised task. This article addresses the opinion target classification of microblog comments with an event graph convolutional network (EventGCN) in a weakly supervised manner. Specifically, we take microblog contents and comments as document nodes and construct an event graph from three typical relationships among event microblogs: the co-occurrence of event keywords extracted from the microblogs, the reply relationship between comments, and document similarity. Under the supervision of a small number of labels, both word features and comment features can then be represented well enough to complete the classification. Experimental results on two event microblog datasets show that EventGCN significantly improves classification performance over the baseline models.
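The abstract describes propagating features over an event graph with a graph convolutional network. The article provides no code here, but the underlying GCN layer (in the style of Kipf and Welling) can be sketched in a few lines of NumPy. The graph, feature dimensions, and weights below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize A with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_forward(A, X, W0, W1):
    """Two-layer GCN: softmax(A_norm @ relu(A_norm @ X @ W0) @ W1)."""
    A_norm = normalize_adjacency(A)
    H = np.maximum(A_norm @ X @ W0, 0.0)      # layer 1 + ReLU
    logits = A_norm @ H @ W1                  # layer 2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # row-wise softmax

# Toy event graph: 4 document nodes; edges stand in for the merged
# relations (keyword co-occurrence, comment reply, document similarity).
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:         # hypothetical edges
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))     # node features (e.g., TF-IDF rows)
W0 = rng.normal(size=(8, 16))   # hidden-layer weights
W1 = rng.normal(size=(16, 3))   # 3 hypothetical opinion-target classes

probs = gcn_forward(A, X, W0, W1)
print(probs.shape)              # one class distribution per document node
```

In practice the weights would be learned from the small labeled subset the abstract mentions; this sketch only shows how node features mix along the event-graph edges during a forward pass.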


Published in

ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 21, Issue 1 (January 2022), 442 pages.
ISSN: 2375-4699. EISSN: 2375-4702. Issue DOI: 10.1145/3494068.


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 October 2020
• Revised: 1 March 2021
• Accepted: 1 June 2021
• Published: 2 November 2021

Published in TALLIP Volume 21, Issue 1.

Qualifiers

• Research article
• Refereed
