Abstract
The assessment of text coherence in Chinese essays using deep learning has been relatively understudied, owing to the lack of large-scale, high-quality discourse coherence evaluation resources. Existing research focuses predominantly on characters, words, and sentences, neglecting automatic evaluation of Chinese essays at the level of whole-article coherence. This paper investigates the automatic evaluation of Chinese essay coherence using data from LEssay, a Chinese essay coherence evaluation dataset jointly constructed by the CubeNLP laboratory of East China Normal University and Microsoft. The coherence of a Chinese essay is evaluated from two main aspects: (1) logical smoothness, i.e., the appropriate use of connectives and the appropriateness of logical relationships between contexts; and (2) the reasonableness of sentence breaks, i.e., how well punctuation is used and how well sentences are structured. We therefore adopt prompt learning and design a multi-angle prediction prompt template that assesses the coherence of a Chinese essay from four angles. During inference, the final coherence prediction is obtained by combining the multi-angle prediction template with a voting mechanism. Notably, the proposed method achieves excellent results on NLPCC 2023 Shared Task 7 Track 1.
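The multi-angle prediction and voting scheme described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual implementation: the template wordings, the label verbalizer, and the tie-breaking rule are all assumptions, and the per-angle masked-LM prediction is stubbed out.

```python
from collections import Counter

# Hypothetical cloze-style prompt templates, one per evaluation angle
# (connectives, logical relations, punctuation, sentence structure).
# In a real system, [MASK] would be filled by a pretrained masked LM.
TEMPLATES = [
    "The use of connectives in this essay is [MASK].",
    "The logical relationship between sentences is [MASK].",
    "The use of punctuation in this essay is [MASK].",
    "The sentence structure of this essay is [MASK].",
]

# Assumed verbalizer mapping model-predicted label words to grades.
VERBALIZER = {"excellent": 2, "moderate": 1, "incoherent": 0}


def predict_angle(essay: str, template: str) -> int:
    """Placeholder for one masked-LM prediction: the essay would be
    concatenated with the template and the model's fill for [MASK]
    mapped through VERBALIZER. Stubbed out in this sketch."""
    raise NotImplementedError


def vote(angle_predictions: list[int]) -> int:
    """Aggregate the four per-angle grades by majority vote; ties fall
    back to the lowest (most conservative) grade among the tied labels."""
    counts = Counter(angle_predictions)
    best = max(counts.values())
    return min(label for label, count in counts.items() if count == best)
```

For example, per-angle grades of `[2, 2, 1, 0]` vote to `2`, while a tie such as `[2, 2, 1, 1]` resolves to the lower grade `1` under this assumed tie-breaking rule.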
Acknowledgments
This work was supported by the Improvement of Innovation Ability of Small and Medium Sci-tech Enterprises Program (No. 2023TSGC0182) and the Tai Shan Industry Leading Talent Project.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ju, Z., Li, Z., Wu, S., Zhao, X., Zhan, Y. (2023). Multi-angle Prediction Based on Prompt Learning for Text Classification. In: Liu, F., Duan, N., Xu, Q., Hong, Y. (eds) Natural Language Processing and Chinese Computing. NLPCC 2023. Lecture Notes in Computer Science(), vol 14304. Springer, Cham. https://doi.org/10.1007/978-3-031-44699-3_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44698-6
Online ISBN: 978-3-031-44699-3
eBook Packages: Computer Science (R0)