
Improving aspect-level sentiment analysis with aspect extraction

  • S.I.: WorldCIST’20
  • Published in: Neural Computing and Applications

Abstract

Aspect-based sentiment analysis (ABSA), a popular research area in NLP, comprises two distinct tasks: aspect extraction (AE) and labelling the extracted aspects with sentiment polarity (ALSA). Although distinct, the two tasks are highly correlated. This work primarily hypothesizes that transferring knowledge from a pre-trained AE model can benefit the performance of ALSA models. Based on this hypothesis, word embeddings are obtained during AE and subsequently fed to the ALSA model. Empirically, this work shows that the added information significantly improves the performance of three different baseline ALSA models on two distinct domains. The improvement also translates well when the AE and ALSA tasks come from different domains.
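To make the transfer scheme concrete, the following is a minimal sketch in PyTorch, assuming a BiLSTM-based AE tagger whose hidden states are concatenated with the ALSA classifier's word embeddings. All class names, dimensions, and the toy batch are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of the transfer idea in the abstract: a BiLSTM aspect-extraction
# (AE) tagger is trained first, then its hidden states are reused as extra word
# features for an aspect-level sentiment (ALSA) classifier.
import torch
import torch.nn as nn


class AspectExtractor(nn.Module):
    """BiLSTM tagger for AE; its hidden states are the transferable features."""

    def __init__(self, vocab_size, emb_dim=100, hidden=128, num_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.tagger = nn.Linear(2 * hidden, num_tags)

    def forward(self, token_ids):
        states, _ = self.lstm(self.embed(token_ids))   # (B, T, 2*hidden)
        return self.tagger(states), states             # tag logits + reusable features


class ALSAClassifier(nn.Module):
    """Sentiment classifier fed word embeddings concatenated with AE features."""

    def __init__(self, vocab_size, emb_dim=100, ae_dim=256, hidden=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + ae_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, token_ids, ae_features):
        x = torch.cat([self.embed(token_ids), ae_features], dim=-1)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])                         # per-sentence polarity logits


# Usage: the AE model is assumed pre-trained; its features are frozen here.
ae_model = AspectExtractor(vocab_size=5000)
alsa_model = ALSAClassifier(vocab_size=5000)
tokens = torch.randint(0, 5000, (4, 12))               # toy batch of 4 sentences
with torch.no_grad():
    _, ae_feats = ae_model(tokens)                     # knowledge transferred from AE
polarity_logits = alsa_model(tokens, ae_feats)          # shape (4, num_classes)
```

In practice the AE model would first be trained on aspect-term tags (e.g. SemEval-style sequence labels) before its states are reused; the freezing and concatenation shown here are only meant to capture the core of the hypothesis.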

Acknowledgements

This research is supported by A*STAR under its RIE 2020 Advanced Manufacturing and Engineering (AME) programmatic grant, Award No. A19E2b0098, Project name: K-EMERGE: Knowledge Extraction, Modelling, and Explainable Reasoning for General Expertise.

Author information

Corresponding author

Correspondence to Soujanya Poria.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Majumder, N., Bhardwaj, R., Poria, S. et al. Improving aspect-level sentiment analysis with aspect extraction. Neural Comput & Applic 34, 8333–8343 (2022). https://doi.org/10.1007/s00521-020-05287-7
