
A Survey of Deep Learning Applied to Story Generation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11910)

  • The original version of this chapter was revised: the last name of the author Sisi Xuanyuan had been misspelt. This has been corrected. The correction to this chapter is available at https://doi.org/10.1007/978-3-030-34139-8_41

Abstract

With the rapid development of deep learning in text summarization and dialogue generation, researchers are reconsidering the long-standing story-generation task, which dates back to the 1970s. Deep learning methods are gradually being adopted to solve the problems of traditional story generation, making story generation a new research hotspot in the field of text generation. However, the widely used seq2seq model cannot adequately model long-distance dependencies in text, so it struggles with story generation, where relations across long spans of text must be captured and coherence and vividness are critical. Recent years have therefore seen numerous proposals for better modeling methods. In this paper, we present the results of a comprehensive study of story generation. We first introduce the relevant concepts of story generation, its background, and the current state of research. We then summarize and analyze the standard methods of story generation. Based on the user constraints they accept, story-generation methods are divided into three categories: the theme-oriented model, the storyline-oriented model, and the human-machine-interaction-oriented model. On this basis, we discuss the basic ideas and main concerns of each category of methods and compare their strengths and weaknesses. Finally, we analyze and forecast future developments that could push story-generation research toward a new frontier.
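
To make the seq2seq limitation above concrete, the following sketch shows a minimal encoder-decoder of the kind such story-generation systems build on. It is not code from the paper or from any surveyed system; the class name, layer sizes, and tensor shapes are illustrative assumptions, written here with PyTorch.

# A minimal seq2seq sketch (illustrative only, not from the paper); layer
# sizes and variable names are assumptions, implemented with PyTorch.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 128, hid_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, prompt: torch.Tensor, story: torch.Tensor) -> torch.Tensor:
        # Encode the prompt into a single fixed-size hidden state.
        _, hidden = self.encoder(self.embed(prompt))
        # Decode the story conditioned only on that state; this bottleneck is
        # one reason plain seq2seq struggles with long-range coherence.
        dec_out, _ = self.decoder(self.embed(story), hidden)
        return self.out(dec_out)  # per-token vocabulary logits

# Toy usage: batch of 2, prompt length 5, story length 20.
model = Seq2Seq(vocab_size=10000)
prompt = torch.randint(0, 10000, (2, 5))
story = torch.randint(0, 10000, (2, 20))
logits = model(prompt, story)  # shape: (2, 20, 10000)

The single fixed-size hidden state handed from encoder to decoder illustrates the bottleneck: everything the decoder knows about the input must fit in that one vector, which is why the long-range dependencies of a multi-sentence story are hard for the plain model to preserve.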


Change history

  • 04 November 2019

    In the version of this paper that was originally published, the last name of the author Sisi Xuanyuan was misspelt. This has been corrected.


Acknowledgment

This work is supported by the National Key Research and Development Program of China (No. 2017YFB1400805).

Author information


Corresponding author

Correspondence to Jinan Sun.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Hou, C., Zhou, C., Zhou, K., Sun, J., Xuanyuan, S. (2019). A Survey of Deep Learning Applied to Story Generation. In: Qiu, M. (eds) Smart Computing and Communication. SmartCom 2019. Lecture Notes in Computer Science, vol. 11910. Springer, Cham. https://doi.org/10.1007/978-3-030-34139-8_1


  • DOI: https://doi.org/10.1007/978-3-030-34139-8_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-34138-1

  • Online ISBN: 978-3-030-34139-8

  • eBook Packages: Computer Science (R0)
