
CVAE-Based Complementary Story Generation Considering the Beginning and Ending

Conference paper. In: Distributed Computing and Artificial Intelligence, Volume 1: 18th International Conference (DCAI 2021)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 327)


Abstract

We study the problem of computer-based generation of a coherent story. We propose two models based on a conditional variational autoencoder (CVAE) that take the first and final sentences of a story as input and generate the intervening story complementarily. The first model concatenates, at an appropriate position, sentences generated forward from the story's first sentence with sentences generated backward from its final sentence. The second model additionally conditions the forward generation on information from the final sentence. To evaluate the generated stories, we use a story-coherence evaluation model based on a general-purpose language model, newly developed for this study, instead of the conventional evaluation metrics that compare a generated story with the ground truth. We show that the proposed methods generate more coherent stories.
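The complementary scheme in the abstract can be sketched as follows. This is an illustrative toy, not the authors' implementation: `generate_forward`, `generate_backward`, and `coherence` are hypothetical stand-ins for the CVAE decoders and the language-model-based coherence evaluator, and the split-point search is a simplified reading of "concatenates ... at appropriate positions".

```python
def generate_forward(first_sentence, n):
    """Stand-in for CVAE decoding forward from the first sentence."""
    return [f"{first_sentence} -> fwd{i}" for i in range(1, n + 1)]

def generate_backward(last_sentence, n):
    """Stand-in for CVAE decoding backward from the final sentence.
    Returned in story order, ending just before the final sentence."""
    return [f"bwd{i} -> {last_sentence}" for i in range(n, 0, -1)]

def coherence(story):
    """Toy coherence score; the paper instead scores coherence with a
    general-purpose language model. Here: prefer 5-sentence stories."""
    return -abs(len(story) - 5)

def complementary_story(first, last, total=3):
    """Try every split point: k sentences generated forward plus
    (total - k) generated backward, and keep the concatenation with
    the best coherence score."""
    best, best_score = None, float("-inf")
    for k in range(total + 1):
        story = ([first]
                 + generate_forward(first, k)
                 + generate_backward(last, total - k)
                 + [last])
        score = coherence(story)
        if score > best_score:
            best, best_score = story, score
    return best

story = complementary_story("Tom found a map.", "He kept the treasure.")
print(len(story))  # first sentence + `total` middle sentences + last sentence
```

The second proposed model would differ at the `generate_forward` step, passing information about the final sentence into the forward decoder as an additional condition.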



Acknowledgments

This work was supported by JSPS KAKENHI Grants: Grant-in-Aid for Scientific Research (B) 19H04184 and Grant-in-Aid for Scientific Research (C) 20K11958.


Corresponding author

Correspondence to Riku Iikura.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Iikura, R., Okada, M., Mori, N. (2022). CVAE-Based Complementary Story Generation Considering the Beginning and Ending. In: Matsui, K., Omatu, S., Yigitcanlar, T., González, S.R. (eds) Distributed Computing and Artificial Intelligence, Volume 1: 18th International Conference. DCAI 2021. Lecture Notes in Networks and Systems, vol 327. Springer, Cham. https://doi.org/10.1007/978-3-030-86261-9_3
