Implicit Objective Network for Emotion Detection

  • Conference paper
Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 11838))

Abstract

Emotion detection has been extensively researched in recent years. However, existing work mainly focuses on recognizing explicit emotion expressions in a piece of text. Little work has been proposed for detecting implicit emotions, which are ubiquitous in people’s expression. In this paper, we propose an Implicit Objective Network to improve the performance of implicit emotion detection. We first capture the implicit sentiment objective as a latent variable by using a variational autoencoder. Then we feed the latent objective into the classifier as prior information to make better predictions. Experimental results on two benchmark datasets show that the proposed model outperforms strong baselines, achieving state-of-the-art performance.
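The abstract's two-step pipeline — sample a latent "implicit objective" via a VAE's reparameterization trick, then feed it to the classifier alongside the sentence encoding — can be sketched as a single forward pass. This is not the authors' architecture; every dimension, weight matrix, and name below is hypothetical, and only the general mechanism (reparameterized latent concatenated into the classifier input) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical sizes: encoder hidden size, latent size, number of emotion classes.
H, Z, C = 16, 4, 6

h = rng.normal(size=H)                      # sentence encoding (e.g. from a BiLSTM)
W_mu = rng.normal(size=(Z, H))              # projects h to the latent mean
W_lv = rng.normal(size=(Z, H))              # projects h to the latent log-variance
mu, log_var = W_mu @ h, W_lv @ h
z = reparameterize(mu, log_var)             # sampled latent "implicit objective"

# Classifier sees the encoding and the latent objective as prior information.
W_c = rng.normal(size=(C, H + Z))
logits = W_c @ np.concatenate([h, z])
probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax over emotion classes
```

In training, such a model would additionally minimize a KL term pulling `(mu, log_var)` toward the prior, as in any VAE; the sketch shows only inference-time information flow.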

This work is supported by the National Natural Science Foundation of China (No.61702121, No.61772378).


Notes

  1.

    The reason we scale the dot products by \(\sqrt{D}\) is to counteract the effect that, if D is large enough, the dot products grow large in magnitude, pushing the softmax into regions where it saturates near 0 or 1 and its gradients vanish [25].

  2.

    http://nlp.stanford.edu/projects/glove/.
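The \(\sqrt{D}\) scaling described in footnote 1 is the standard scaled dot-product attention of [25]. A minimal sketch, with shapes and inputs chosen purely for illustration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(D)) V.

    Q, K, V: arrays of shape (seq_len, D). Dividing the logits by
    sqrt(D) keeps their variance from growing with D, which would
    otherwise saturate the softmax toward 0/1 (the effect footnote 1
    describes).
    """
    D = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(D)                  # (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V

# Toy example: 2 tokens, D = 2; with V = I the output equals the
# attention-weight matrix itself.
Q = K = V = np.eye(2)
out = scaled_dot_product_attention(Q, K, V)
```

With identical queries and keys, each token attends most to itself, so the diagonal of `out` dominates each row.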

References

  1. Bahuleyan, H., Mou, L., Vechtomova, O., Poupart, P.: Variational attention for sequence-to-sequence models. arXiv preprint arXiv:1712.08207 (2017)

  2. Balahur, A., Hermida, J.M., Montoyo, A.: Detecting implicit expressions of emotion in text: a comparative analysis. Decis. Support Syst. 53(4), 742–753 (2012)

  3. Balazs, J.A., Marrese-Taylor, E., Matsuo, Y.: IIIDYT at IEST 2018: implicit emotion classification with deep contextualized word representations. arXiv preprint arXiv:1808.08672 (2018)

  4. Bowman, S.R., Vilnis, L., Vinyals, O., Dai, A.M., Jozefowicz, R., Bengio, S.: Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349 (2015)

  5. Goyal, A.G.A.P., Sordoni, A., Côté, M.A., Ke, N.R., Bengio, Y.: Z-forcing: training stochastic recurrent networks. In: Proceedings of Advances in Neural Information Processing Systems, pp. 6713–6723 (2017)

  6. Hu, M., Liu, B.: Mining and summarizing customer reviews. In: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 168–177. ACM (2004)

  7. Johnson, R., Zhang, T.: Supervised and semi-supervised text categorization using LSTM for region embeddings. arXiv preprint arXiv:1602.02373 (2016)

  8. Johnson, R., Zhang, T.: Deep pyramid convolutional neural networks for text categorization. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pp. 562–570 (2017)

  9. Joulin, A., Grave, E., Bojanowski, P., Mikolov, T.: Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759 (2016)

  10. Kamal, R., Shah, M.A., Maple, C., Masood, M., Wahid, A., Mehmood, A.: Emotion classification and crowd source sensing; a lexicon based approach. IEEE Access 7, 27124–27134 (2019)

  11. Kim, Y.: Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882 (2014)

  12. Klinger, R., De Clercq, O., Mohammad, S.M., Balahur, A.: IEST: WASSA-2018 implicit emotions shared task. arXiv preprint arXiv:1809.01083 (2018)

  13. Lai, S., Xu, L., Liu, K., Zhao, J.: Recurrent convolutional neural networks for text classification. In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (2015)

  14. Le, H., Tran, T., Nguyen, T., Venkatesh, S.: Variational memory encoder-decoder. In: Proceedings of Advances in Neural Information Processing Systems, pp. 1508–1518 (2018)

  15. Liu, B.: Sentiment analysis and opinion mining. Synth. Lect. Hum. Lang. Technol. 5(1), 1–167 (2012)

  16. Maaten, L.V.D., Hinton, G.: Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008)

  17. Miao, Y., Yu, L., Blunsom, P.: Neural variational inference for text processing. In: Proceedings of the International Conference on Machine Learning, pp. 1727–1736 (2016)

  18. Pang, B., Lee, L., Vaithyanathan, S.: Thumbs up?: sentiment classification using machine learning techniques. In: Proceedings of the ACL-2002 Conference on Empirical Methods in Natural Language Processing, pp. 79–86. Association for Computational Linguistics (2002)

  19. Ren, H., Ren, Y., Li, X., Feng, W., Liu, M.: Natural logic inference for emotion detection. In: Sun, M., Wang, X., Chang, B., Xiong, D. (eds.) CCL/NLP-NABD 2017. LNCS (LNAI), vol. 10565, pp. 424–436. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-69005-6_35

  20. Ren, Y., Wang, R., Ji, D.: A topic-enhanced word embedding for twitter sentiment classification. Inf. Sci. 369, 188–198 (2016)

  21. Ren, Y., Zhang, Y., Zhang, M., Ji, D.: Context-sensitive twitter sentiment classification using neural network. In: Thirtieth AAAI Conference on Artificial Intelligence (2016)

  22. Rozental, A., Fleischer, D., Kelrich, Z.: Amobee at IEST 2018: transfer learning from language models. arXiv preprint arXiv:1808.08782 (2018)

  23. Scherer, K.R.: What are emotions? And how can they be measured? Soc. Sci. Inf. 44(4), 695–729 (2005)

  24. Schuster, M., Paliwal, K.K.: Bidirectional recurrent neural networks. IEEE Trans. Signal Process. 45(11), 2673–2681 (1997)

  25. Vaswani, A., et al.: Attention is all you need. In: Proceedings of Advances in Neural Information Processing Systems, pp. 5998–6008 (2017)

  26. Zhou, P., Qi, Z., Zheng, S., Xu, J., Bao, H., Xu, B.: Text classification improved by integrating bidirectional LSTM with two-dimensional max pooling. arXiv preprint arXiv:1611.06639 (2016)

  27. Zhou, P., et al.: Attention-based bidirectional long short-term memory networks for relation classification. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 207–212 (2016)

Author information

Correspondence to Yafeng Ren or Donghong Ji.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Fei, H., Ren, Y., Ji, D. (2019). Implicit Objective Network for Emotion Detection. In: Tang, J., Kan, M.-Y., Zhao, D., Li, S., Zan, H. (eds.) Natural Language Processing and Chinese Computing. NLPCC 2019. Lecture Notes in Computer Science, vol. 11838. Springer, Cham. https://doi.org/10.1007/978-3-030-32233-5_50

  • DOI: https://doi.org/10.1007/978-3-030-32233-5_50

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32232-8

  • Online ISBN: 978-3-030-32233-5

  • eBook Packages: Computer Science (R0)
