Integrating Topic Information into VAE for Text Semantic Similarity

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11305)

Abstract

Representation learning is an essential step in the text similarity task. Methods based on neural variational inference first learn a semantic representation of each text, and then measure the degree of similarity between texts by computing the cosine of their representations. However, simply using a neural network to learn semantic representations is generally not sufficient, as it cannot fully capture the rich semantic information. Considering that the similarity of context information reflects the similarity of a text pair in most cases, we integrate topic information into a stacked variational autoencoder during text representation learning. The improved text representations are then used for text similarity calculation. Experiments show that our approach achieves state-of-the-art performance.
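The paper's code is not included on this page; the sketch below is a minimal, hypothetical illustration of the general idea described in the abstract: a VAE-style encoder produces a latent representation of a text, a topic vector is fused with that representation, and a text pair is scored by the cosine of the fused representations. All class names, dimensions, and the concatenation-based fusion are illustrative assumptions, not the authors' exact architecture.

# Hypothetical sketch of the idea in the abstract (assumptions throughout):
# a VAE recognition network encodes a bag-of-words text, a topic vector
# (e.g. topic proportions from a topic model) is concatenated with the
# latent code, and a text pair is scored by cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicAwareVAEEncoder(nn.Module):
    def __init__(self, input_dim, hidden_dim, latent_dim, topic_dim):
        super().__init__()
        # VAE recognition network: bag-of-words input -> Gaussian latent code.
        self.hidden = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        # Fuse the latent code with the topic vector into the final representation.
        self.fuse = nn.Linear(latent_dim + topic_dim, latent_dim)

    def forward(self, bow, topic_vec):
        h = torch.relu(self.hidden(bow))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        rep = torch.tanh(self.fuse(torch.cat([z, topic_vec], dim=-1)))
        return rep, mu, logvar

def pair_similarity(encoder, bow_a, topic_a, bow_b, topic_b):
    # Score a text pair by the cosine of their topic-augmented representations.
    rep_a, _, _ = encoder(bow_a, topic_a)
    rep_b, _, _ = encoder(bow_b, topic_b)
    return F.cosine_similarity(rep_a, rep_b, dim=-1)

# Toy usage with random tensors standing in for real bag-of-words counts
# and topic proportions.
enc = TopicAwareVAEEncoder(input_dim=2000, hidden_dim=256, latent_dim=64, topic_dim=50)
bow_a, bow_b = torch.rand(1, 2000), torch.rand(1, 2000)
topic_a, topic_b = torch.rand(1, 50), torch.rand(1, 50)
print(pair_similarity(enc, bow_a, topic_a, bow_b, topic_b))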

Acknowledgements

This work was funded by the National Natural Science Foundation of China (Grant No. 61762069), the Natural Science Foundation of Inner Mongolia Autonomous Region (Grant Nos. 2017BS0601 and 2018MS06025), and the Program of Higher-Level Talents of Inner Mongolia University (Grant No. 21500-5165161).

Author information

Corresponding author

Correspondence to Rong Yan.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Su, X., Yan, R., Gong, Z., Fu, Y., Xu, H. (2018). Integrating Topic Information into VAE for Text Semantic Similarity. In: Cheng, L., Leung, A., Ozawa, S. (eds) Neural Information Processing. ICONIP 2018. Lecture Notes in Computer Science, vol 11305. Springer, Cham. https://doi.org/10.1007/978-3-030-04221-9_49

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-04221-9_49

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04220-2

  • Online ISBN: 978-3-030-04221-9

  • eBook Packages: Computer Science, Computer Science (R0)
