
A Neural Topic Model Based on Variational Auto-Encoder for Aspect Extraction from Opinion Texts

  • Conference paper
  • First Online:
Natural Language Processing and Chinese Computing (NLPCC 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11838)

Abstract

Aspect extraction is an important task in Aspect-Based Sentiment Analysis (ABSA). To address it, we propose a novel variant of the neural topic model based on the Variational Auto-Encoder (VAE), consisting of an aspect encoder, an auxiliary encoder, and a hierarchical decoder. Unlike previous neural topic model based approaches, our model builds latent variables in multiple vector spaces and can learn latent semantic representations at a finer granularity. It also provides a direct and effective solution for unsupervised aspect extraction, making it well suited to low-resource settings. Experimental evaluation on both a Chinese corpus and an English corpus demonstrates that our model has better text-modeling capacity and substantially outperforms previous state-of-the-art unsupervised approaches to aspect extraction.
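To make the abstract's architecture concrete, the following is a minimal sketch of the forward pass of a generic VAE-based neural topic model: an encoder maps a bag-of-words document to a Gaussian posterior over aspects, the reparameterization trick samples a latent code, and a decoder reconstructs a word distribution from the softmax-normalized aspect proportions. This is an illustrative simplification, not the paper's exact model (which additionally uses an auxiliary encoder and a hierarchical decoder); all dimensions, parameter names, and the random initialization are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper.
VOCAB, HIDDEN, N_ASPECTS = 50, 16, 5

# Randomly initialised parameters stand in for trained weights.
params = {
    "enc_w": rng.normal(0, 0.1, (VOCAB, HIDDEN)),
    "enc_b": np.zeros(HIDDEN),
    "mu_w": rng.normal(0, 0.1, (HIDDEN, N_ASPECTS)),
    "mu_b": np.zeros(N_ASPECTS),
    "logvar_w": rng.normal(0, 0.1, (HIDDEN, N_ASPECTS)),
    "logvar_b": np.zeros(N_ASPECTS),
    "dec_w": rng.normal(0, 0.1, (N_ASPECTS, VOCAB)),
    "dec_b": np.zeros(VOCAB),
}

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def encode(bow):
    """Aspect encoder: bag-of-words -> Gaussian posterior over aspects."""
    h = np.tanh(bow @ params["enc_w"] + params["enc_b"])
    mu = h @ params["mu_w"] + params["mu_b"]
    logvar = h @ params["logvar_w"] + params["logvar_b"]
    return mu, logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def decode(z):
    """Decoder: latent code -> aspect proportions -> word distribution."""
    theta = softmax(z)  # document-aspect proportions
    recon = softmax(theta @ params["dec_w"] + params["dec_b"])
    return theta, recon

def kl_divergence(mu, logvar):
    """KL(q(z|x) || N(0, I)) term of the VAE objective, per document."""
    return -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=-1)

# Forward pass on two toy bag-of-words documents.
bow = rng.integers(0, 3, (2, VOCAB)).astype(float)
mu, logvar = encode(bow)
z = reparameterize(mu, logvar)
theta, recon = decode(z)
kl = kl_divergence(mu, logvar)
```

Training would minimize the reconstruction loss (cross-entropy between `bow` and `recon`) plus the KL term; after training, the rows of `dec_w` can be inspected as per-aspect word weightings, which is how neural topic models are typically read off for aspect extraction.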




Corresponding author

Correspondence to Yuanchao Liu.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Cui, P., Liu, Y., Liu, B. (2019). A Neural Topic Model Based on Variational Auto-Encoder for Aspect Extraction from Opinion Texts. In: Tang, J., Kan, M.Y., Zhao, D., Li, S., Zan, H. (eds) Natural Language Processing and Chinese Computing. NLPCC 2019. Lecture Notes in Computer Science (LNAI), vol 11838. Springer, Cham. https://doi.org/10.1007/978-3-030-32233-5_51


  • DOI: https://doi.org/10.1007/978-3-030-32233-5_51

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32232-8

  • Online ISBN: 978-3-030-32233-5

  • eBook Packages: Computer Science (R0)
