
Generating Long and Coherent Text with Multi-Level Generative Adversarial Networks

  • Conference paper

Web and Big Data (APWeb-WAIM 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12859)

Abstract

In this paper, we study the task of generating long and coherent text. In the literature, methods based on Generative Adversarial Nets (GANs) have been among the mainstream approaches to generic text generation. We aim to improve two aspects of GAN-based generic text generation, namely long-sequence optimization and semantic-coherence enhancement. To this end, we propose a novel Multi-Level Generative Adversarial Network (MLGAN) for long and coherent text generation. Our approach explicitly models the text generation process at three levels: paragraph-, sentence-, and word-level generation. At the top two levels, we generate continuous paragraph vectors and sentence vectors as semantic sketches that plan the entire content, while at the bottom level we generate discrete word tokens that realize the sentences. Furthermore, we utilize a conditional GAN architecture to enhance inter-sentence coherence by injecting paragraph vectors into sentence vector generation. Extensive experimental results demonstrate the effectiveness of the proposed model.
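The three-level pipeline described above can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: the layer sizes, the use of untrained NumPy affine maps in place of trained neural generators, and all function names are assumptions made for illustration. What it shows is the structural idea: a paragraph vector plans the content, sentence vectors are generated conditionally on it (the conditional-GAN injection that enforces inter-sentence coherence), and discrete tokens are sampled only at the bottom level.

```python
import numpy as np

rng = np.random.default_rng(0)

D_P, D_S, V, N_SENT, SENT_LEN = 64, 32, 1000, 4, 10  # assumed toy dimensions

# Untrained stand-ins for the three generators (plain affine maps).
W_p = rng.normal(size=(16, D_P))          # noise -> paragraph vector
W_s = rng.normal(size=(D_P + 16, D_S))    # [paragraph vec; noise] -> sentence vector
W_w = rng.normal(size=(D_S, V))           # sentence vector -> word logits

def generate_paragraph_vector():
    """Top level: a continuous paragraph vector that plans the entire content."""
    z = rng.normal(size=16)
    return np.tanh(z @ W_p)

def generate_sentence_vectors(p_vec, n=N_SENT):
    """Middle level: conditional generation. The paragraph vector is injected
    (concatenated with fresh noise) for every sentence, which is what ties
    the sentences to one shared plan."""
    vecs = []
    for _ in range(n):
        z = rng.normal(size=16)
        vecs.append(np.tanh(np.concatenate([p_vec, z]) @ W_s))
    return np.stack(vecs)

def realize_words(s_vec, length=SENT_LEN):
    """Bottom level: discrete word tokens realizing one sentence,
    sampled from a softmax over the vocabulary."""
    logits = s_vec @ W_w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(V, size=length, p=probs)

p = generate_paragraph_vector()           # shape (D_P,)
sents = generate_sentence_vectors(p)      # shape (N_SENT, D_S)
tokens = [realize_words(s) for s in sents]
print(len(tokens), tokens[0].shape)
```

In a trained system each affine map would be a learned generator with its own adversarial discriminator; the sketch only fixes the data flow, in particular that sentence generation consumes the paragraph vector rather than running unconditionally.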


Notes

  1. https://github.com/chinese-poetry/chinese-poetry
  2. https://www.imdb.com
  3. https://github.com/brightmart/albert_zh


Acknowledgement

This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 61872369 and 61832017, the Beijing Academy of Artificial Intelligence (BAAI) under Grant No. BAAI2020ZJ0301, and the Beijing Outstanding Young Scientist Program under Grant No. BJJWZYJH012019100020098.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Tang, T., Li, J., Zhao, W.X., Wen, J.R. (2021). Generating Long and Coherent Text with Multi-Level Generative Adversarial Networks. In: U, L.H., Spaniol, M., Sakurai, Y., Chen, J. (eds) Web and Big Data. APWeb-WAIM 2021. Lecture Notes in Computer Science, vol 12859. Springer, Cham. https://doi.org/10.1007/978-3-030-85899-5_4


  • DOI: https://doi.org/10.1007/978-3-030-85899-5_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85898-8

  • Online ISBN: 978-3-030-85899-5

  • eBook Packages: Computer Science (R0)
