Abstract
Sentiment analysis of textual content has attracted considerable attention. However, most existing models utilize only the target text to mine the deep relations between text representation features and sentiment values, ignoring users' historically published texts, which also contain much valuable information. Accordingly, in this paper we propose SentiMem, a new sentiment analysis framework that incorporates users' historical texts to improve the accuracy of sentiment classification.
In SentiMem, to exploit users' interests and preferences hidden in their texts, we adopt SenticNet to capture concept-level semantics; to model users' temperaments, we combine multiple sentiment lexicons with a multi-head attention mechanism to extract users' diverse characteristics. Two memory networks, the Interests Memory Network and the Temperaments Memory Network, then store information about users' interests and temperaments, respectively. The interests memory is updated in a first-in-first-out manner and read with an attention mechanism to match a user's most recent interests against the target text. The temperaments memory is updated in a forgetting-and-strengthening manner to track the gradual shift of human characteristics. Additionally, we learn a global matrix that represents the common features shared across users' temperaments and is queried when classifying a user's newly posted text. Extensive experiments on two real-world datasets show that SentiMem achieves significant accuracy improvements over state-of-the-art methods.
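To make the two memory mechanisms above concrete, the following is a minimal NumPy sketch of a first-in-first-out interests memory read by attention and a temperaments memory updated by forgetting and strengthening. The class names, slot counts, and the exact forgetting rule are illustrative assumptions for exposition, not the paper's actual implementation.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class InterestsMemory:
    # First-in-first-out memory of a user's recent interest vectors.
    def __init__(self, num_slots, dim):
        self.slots = np.zeros((num_slots, dim))

    def write(self, interest_vec):
        # Drop the oldest slot and append the newest (FIFO update).
        self.slots = np.vstack([self.slots[1:], interest_vec])

    def read(self, query):
        # Attention read: weight each slot by its similarity to the target text.
        weights = softmax(self.slots @ query)
        return weights @ self.slots

class TemperamentsMemory:
    # Memory whose slots decay (forgetting) and are strengthened by recurring traits.
    def __init__(self, num_slots, dim, forget=0.9):
        self.slots = np.random.randn(num_slots, dim) * 0.01
        self.forget = forget

    def write(self, temperament_vec):
        # Slots similar to the new evidence are strengthened; all slots decay.
        sim = softmax(self.slots @ temperament_vec)
        self.slots = self.forget * self.slots + np.outer(sim, temperament_vec)

    def read(self, query):
        weights = softmax(self.slots @ query)
        return weights @ self.slots

# Toy usage: 4 memory slots, 8-dimensional text representations.
dim = 8
interests = InterestsMemory(num_slots=4, dim=dim)
temperaments = TemperamentsMemory(num_slots=4, dim=dim)
for historical_text_vec in np.random.randn(10, dim):
    interests.write(historical_text_vec)
    temperaments.write(historical_text_vec)
target_text_vec = np.random.randn(dim)
user_context = np.concatenate([interests.read(target_text_vec),
                               temperaments.read(target_text_vec)])

In a full model, the vectors read from both memories would be combined with the target text's representation before the final sentiment classifier.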
This work was supported by the National Key R&D Program of China [2018YFB1004700]; the National Natural Science Foundation of China [61872238, 61972254]; the Shanghai Science and Technology Fund [17510740200], and the CCF-Huawei Database System Innovation Research Plan [CCF-Huawei DBIR2019002A].