Abstract
Despite the success of prompt-learning-based models in text generation tasks, they still struggle to incorporate external commonsense knowledge effectively and are particularly vulnerable to the introduction of biased knowledge. In this work, we propose KiProL, a knowledge-injected prompt learning framework that improves both language generation quality and training efficiency. KiProL tackles the ineffective learning and utilization of knowledge, reduces biased knowledge introduction, and avoids high training expenses: it injects the recommended knowledge into the prompt learning encoder to optimize guiding prefixes without modifying the pre-trained model's parameters, which lowers computational cost and shortens training time. Experiments on two publicly available datasets (Explanation Generation and Story Ending Generation) show that KiProL outperforms baseline models, improving fluency by an average of 2% and diversity by 3.4% compared with advanced prompt-learning-based methods, while training 45% faster than the state-of-the-art knowledgeable prompt learning method.
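To make the mechanism the abstract describes concrete, here is a minimal sketch of knowledge-conditioned prefix tuning: a small trainable encoder maps a retrieved knowledge embedding into soft prefix vectors that are prepended to the input, while the pre-trained language model stays frozen. This is not the authors' released implementation; the class `KnowledgePrefixEncoder`, the prefix length of 10, the GPT-2 backbone, and the random placeholder knowledge embedding are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code) of knowledge-injected
# prefix tuning: only the prefix encoder is trained; the LM stays frozen.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

class KnowledgePrefixEncoder(nn.Module):
    """Maps a knowledge embedding to `prefix_len` soft prefix vectors."""
    def __init__(self, knowledge_dim, hidden_dim, prefix_len):
        super().__init__()
        self.prefix_len = prefix_len
        self.hidden_dim = hidden_dim
        self.proj = nn.Sequential(
            nn.Linear(knowledge_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, prefix_len * hidden_dim),
        )

    def forward(self, knowledge_emb):          # (batch, knowledge_dim)
        out = self.proj(knowledge_emb)          # (batch, prefix_len * hidden)
        return out.view(-1, self.prefix_len, self.hidden_dim)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
for p in model.parameters():                    # freeze the pre-trained LM
    p.requires_grad = False

prefix_encoder = KnowledgePrefixEncoder(
    knowledge_dim=768, hidden_dim=model.config.n_embd, prefix_len=10)

# Toy forward pass; in practice knowledge_emb would come from a commonsense
# retriever (e.g. over ConceptNet), not random noise.
ids = tokenizer("The roast turkey was", return_tensors="pt").input_ids
tok_emb = model.transformer.wte(ids)            # (1, seq_len, hidden)
knowledge_emb = torch.randn(1, 768)             # placeholder retrieval output
prefix = prefix_encoder(knowledge_emb)          # (1, 10, hidden)
inputs_embeds = torch.cat([prefix, tok_emb], dim=1)
logits = model(inputs_embeds=inputs_embeds).logits
```

Under these assumptions, an optimizer would be constructed over `prefix_encoder.parameters()` only, which is what makes such approaches cheaper to train than full fine-tuning of the backbone.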
Acknowledgment
This work was supported in part by the National Key Research and Development Program of China under grant 2022YFF0902701; the National Natural Science Foundation of China under grants U21A20468, 61921003, 61972043, U22A201339 and 62202065; Zhejiang Lab under grant 2021PD0AB02; the Key R&D Program of Zhejiang under grant 2022C04006; and the Fundamental Research Funds for the Central Universities under grant 2020XD-A07-1.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zhao, Y., Huang, Y., Cheng, B. (2024). KiProL: A Knowledge-Injected Prompt Learning Framework for Language Generation. In: Yang, D.N., Xie, X., Tseng, V.S., Pei, J., Huang, J.W., Lin, J.C.W. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science, vol. 14650. Springer, Singapore. https://doi.org/10.1007/978-981-97-2266-2_6
DOI: https://doi.org/10.1007/978-981-97-2266-2_6
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-2265-5
Online ISBN: 978-981-97-2266-2
eBook Packages: Computer Science, Computer Science (R0)