
T2S: An Encoder-Decoder Model for Topic-Based Natural Language Generation

  • Conference paper
Natural Language Processing and Information Systems (NLDB 2018)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10859)


Abstract

Natural language generation (NLG) plays a critical role in many natural language processing (NLP) applications, and topics provide a powerful tool for understanding natural language. We propose a novel topic-based NLG model that can generate topic-coherent sentences given a single topic or a combination of topics. The model extends the recurrent encoder-decoder framework by introducing a global topic embedding matrix. Experimental results show that our model not only transforms a source sentence into a representative topic distribution that gives a better interpretation of the sentence, but also generates topic-coherent and diversified sentences from different topic distributions, without any text-level input.
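The abstract describes the architecture only at a high level: an encoder maps a source sentence to a distribution over topics, and a global topic embedding matrix turns that distribution into a vector that conditions the decoder. Below is a minimal sketch of one plausible reading of that design; it is not the authors' implementation (the paper used TensorFlow, per the notes below), and every name, dimension, and wiring choice is an assumption for illustration.

```python
# Illustrative sketch only: one plausible wiring of a topic-based
# encoder-decoder, NOT the authors' T2S code. PyTorch is used for
# brevity; the paper's own implementation was in TensorFlow.
import torch
import torch.nn as nn

class T2SSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid_dim=512, n_topics=50):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # encoder hidden state -> distribution over n_topics topics
        self.to_topics = nn.Linear(hid_dim, n_topics)
        # global topic embedding matrix: one learned vector per topic
        self.topic_emb = nn.Parameter(torch.randn(n_topics, hid_dim))
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def encode(self, src):
        # src: (batch, seq) token ids -> (batch, n_topics) topic distribution
        _, h = self.encoder(self.word_emb(src))
        return torch.softmax(self.to_topics(h.squeeze(0)), dim=-1)

    def decode(self, topic_dist, tgt_in):
        # Mix topic vectors by the distribution and use the result as the
        # decoder's initial state, so generation needs no text-level input.
        h0 = (topic_dist @ self.topic_emb).unsqueeze(0)
        out, _ = self.decoder(self.word_emb(tgt_in), h0)
        return self.out(out)  # (batch, seq, vocab) logits
```

Under this reading, generating from a single topic amounts to passing a one-hot `topic_dist`, and combining topics amounts to passing a mixture, which matches the abstract's claim that sentences can be generated from a topic distribution alone.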


Notes

  1. Available at https://www.kaggle.com/c/predict-closed-questions-on-stack-overflow/download/train.zip.

  2. Pre-trained word vectors of GloVe can be obtained from http://nlp.stanford.edu/projects/glove/ (a loading sketch follows these notes).

  3. https://www.tensorflow.org/.
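The GloVe files linked in note 2 are plain text: each line holds a token followed by its vector components. A small loading helper (an assumed usage pattern, not part of the paper) might look like:

```python
# Load a GloVe text file (e.g. glove.6B.300d.txt from the site above)
# into a {word: vector} dictionary. Helper name is illustrative.
import numpy as np

def load_glove(path):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors
```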


Author information


Correspondence to Jiangtao Ren.


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Ou, W., Chen, C., Ren, J. (2018). T2S: An Encoder-Decoder Model for Topic-Based Natural Language Generation. In: Silberztein, M., Atigui, F., Kornyshova, E., Métais, E., Meziane, F. (eds) Natural Language Processing and Information Systems. NLDB 2018. Lecture Notes in Computer Science, vol 10859. Springer, Cham. https://doi.org/10.1007/978-3-319-91947-8_15


  • DOI: https://doi.org/10.1007/978-3-319-91947-8_15


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-91946-1

  • Online ISBN: 978-3-319-91947-8

  • eBook Packages: Computer Science, Computer Science (R0)
