Abstract
We study the problem of computer-based generation of a coherent story. We propose models based on a conditional variational autoencoder (CVAE) that take the first and final sentences of a story as input and generate the remaining story complementarily. One model concatenates, at appropriate positions, sentences generated forward from the story's first sentence with sentences generated backward from its final sentence. The other model additionally takes information from the final sentence into account when generating sentences forward from the first sentence. To evaluate the generated stories, instead of conventional metrics that compare a generated story with the ground truth, we use a story-coherence evaluation model built on a general-purpose language model and developed for this study. We show that the proposed methods generate more coherent stories.
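As a rough illustration of the complementary generation scheme described above (a minimal sketch, not the authors' implementation), the Python code below assumes two hypothetical decoder functions, generate_forward and generate_backward, standing in for the CVAE decoders, and shows how forward- and backward-generated sentences could be joined at a chosen position to form a five-sentence story:

from typing import Callable, List


def complementary_story(
    first: str,
    final: str,
    generate_forward: Callable[[List[str], str], str],
    generate_backward: Callable[[List[str], str], str],
    total_sentences: int = 5,
    join_position: int = 3,
) -> List[str]:
    """Build a story of total_sentences sentences.

    Sentences up to join_position are produced forward from `first`
    (optionally conditioned on `final`); the remaining middle sentences
    are produced backward from `final`; the two halves are then joined.
    """
    forward = [first]
    for _ in range(join_position - 1):
        # The second model variant also conditions on `final` here, so the
        # forward decoder can take the ending into account.
        forward.append(generate_forward(forward, final))

    backward = [final]
    for _ in range(total_sentences - join_position - 1):
        backward.append(generate_backward(backward, first))

    return forward + list(reversed(backward))


if __name__ == "__main__":
    # Toy stand-in generators, used only to show the control flow.
    fwd = lambda ctx, final: f"<forward sentence after: {ctx[-1]!r}>"
    bwd = lambda ctx, first: f"<backward sentence before: {ctx[-1]!r}>"
    story = complementary_story(
        "Anna found an old map.", "She finally made it home.", fwd, bwd
    )
    print("\n".join(story))

In an actual system the two stand-in lambdas would be replaced by sampling from the forward and backward CVAE decoders, and the join position would be chosen to maximize coherence between the two halves.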
Acknowledgments
This work was supported by JSPS KAKENHI Grants-in-Aid for Scientific Research (B) 19H04184 and for Scientific Research (C) 20K11958.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Iikura, R., Okada, M., Mori, N. (2022). CVAE-Based Complementary Story Generation Considering the Beginning and Ending. In: Matsui, K., Omatu, S., Yigitcanlar, T., González, S.R. (eds) Distributed Computing and Artificial Intelligence, Volume 1: 18th International Conference. DCAI 2021. Lecture Notes in Networks and Systems, vol 327. Springer, Cham. https://doi.org/10.1007/978-3-030-86261-9_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-86260-2
Online ISBN: 978-3-030-86261-9
eBook Packages: Intelligent Technologies and Robotics (R0)