Abstract
Storytelling is a knowledge-driven task: given a context, a model must associate relevant information and organize it into a coherent story. Several knowledge-enhanced storytelling models have been proposed, but they suffer from drawbacks such as weak retrieval and selection strategies. To address these issues, we propose a three-stage knowledge-grounded storytelling model that combines a semantic knowledge retrieval module, a knowledge selection module, and a story generation module. We construct both in-domain and open-domain knowledge sources suited to story generation, and build the selection and generation modules on readily available transformer-based frameworks. We devise two approaches to train the knowledge selection module: one leverages constructed pseudo labels; the other jointly trains the selection and generation modules so that the selector benefits from the supervision of the ground-truth story. Experiments on the public ROCStories dataset, with both automatic and human evaluation, demonstrate the effectiveness of our method.
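To make the three-stage design concrete, below is a minimal sketch of how retrieval, selection, and writing compose, written in Python with Hugging Face transformers. The facebook/bart-base checkpoint, the token-overlap retriever, the top-n selection stub, and the prompt format are illustrative assumptions; in the paper, retrieval is semantic and the selection and generation modules are trained transformer models.

```python
# Minimal sketch of the retrieval -> selection -> writing pipeline.
# The checkpoint, retrieval heuristic, and selection stub below are
# illustrative assumptions, not the authors' trained modules.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
generator = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def retrieve(context: str, knowledge_base: list[str], top_k: int = 10) -> list[str]:
    """Stage 1: knowledge retrieval, stubbed as token overlap with the context
    (the paper retrieves semantically from in-domain and open-domain knowledge)."""
    ctx = set(context.lower().split())
    score = lambda fact: len(ctx & set(fact.lower().split()))
    return sorted(knowledge_base, key=score, reverse=True)[:top_k]

def select(candidates: list[str], top_n: int = 3) -> list[str]:
    """Stage 2: knowledge selection. The paper trains this module with pseudo
    labels, or jointly with the generator; here we simply keep the top-n
    retrieved candidates as a placeholder."""
    return candidates[:top_n]

def write_story(context: str, knowledge: list[str]) -> str:
    """Stage 3: story generation conditioned on the context plus the
    selected knowledge snippets."""
    prompt = " </s> ".join([context] + knowledge)
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output = generator.generate(**inputs, max_length=128, num_beams=4)
    return tokenizer.decode(output[0], skip_special_tokens=True)

context = "Jenny found an old map in the attic."
facts = ["maps can lead to treasure",
         "attics store forgotten things",
         "cooking requires heat"]
story = write_story(context, select(retrieve(context, facts)))
```

When the two modules are trained jointly, one plausible objective, in the spirit of retrieval-augmented generation, marginalizes the story likelihood over the selected knowledge, p(y|x) = Σ_z p(z|x) · p(y|x, z), so the selector is supervised by the ground-truth story; the pseudo-label alternative instead trains the selector directly against constructed relevance labels.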
Notes
- 2. For verbs or nouns, their relations can be organized into a tree, where parent nodes are more abstract and child nodes are more specific.
Acknowledgments
We thank the reviewers for their valuable comments. This work was supported by the National Key Research and Development Program of China (No. 2020AAA0106602).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Qin, W., Zhao, D. (2022). Retrieval, Selection and Writing: A Three-Stage Knowledge Grounded Storytelling Model. In: Lu, W., Huang, S., Hong, Y., Zhou, X. (eds.) Natural Language Processing and Chinese Computing (NLPCC 2022). Lecture Notes in Computer Science, vol. 13551. Springer, Cham. https://doi.org/10.1007/978-3-031-17120-8_28
Print ISBN: 978-3-031-17119-2
Online ISBN: 978-3-031-17120-8