Abstract
Automated essay comment generation is important for writing education, as it can reduce the burden on teachers while providing students with rapid feedback to improve their writing. When writing, students often take the writing thought ideas in excellent essays as references, which inspire them to be more creative; teachers likewise refer to such essays when suggesting how to improve essay quality. Inspired by this behaviour from the psychology of writing, we guide a Large Language Model (LLM) to imitate it as well. In this paper, we study the essay comment generation task in a practical setting, which aims to generate comments on how to amplify the writing thought ideas of a given English expository essay. To tackle this task, we propose a two-stage comment generation framework: we first search for cases of chains of writing thought ideas, and then use them as evidence to guide the LLM to learn from these references and generate more concrete comments. In addition, we manually collect English expository essays to build the knowledge base, along with a set of essay-comment pairs (the source code is available at https://github.com/CarryCKW/EssayComGen). Extensive experiments show that our method significantly outperforms strong baselines.
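The two-stage framework described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the knowledge-base format, the bag-of-words cosine retrieval, and the function names (`retrieve_cases`, `build_comment_prompt`) are all assumptions made here for concreteness, and the second stage is shown as prompt assembly rather than an actual LLM call.

```python
# Sketch of the two-stage comment generation framework:
# stage 1 retrieves reference cases of chains of writing thought
# ideas from a knowledge base; stage 2 assembles them as evidence
# into the prompt that would be sent to the LLM.
from collections import Counter
import math


def _bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve_cases(essay: str, knowledge_base: list[dict], k: int = 2) -> list[dict]:
    """Stage 1: return the k cases whose essays are most similar to the input."""
    ranked = sorted(
        knowledge_base,
        key=lambda case: _bow_cosine(essay, case["essay"]),
        reverse=True,
    )
    return ranked[:k]


def build_comment_prompt(essay: str, cases: list[dict]) -> str:
    """Stage 2: assemble an evidence-grounded prompt for the LLM."""
    evidence = "\n".join(f"- {c['thought_chain']}" for c in cases)
    return (
        "Reference chains of writing thought ideas:\n"
        f"{evidence}\n\n"
        "Essay:\n"
        f"{essay}\n\n"
        "Comment on how to amplify the writing thought ideas of this essay."
    )
```

In practice the simple lexical retriever above would be replaced by whatever case-search method the paper uses, and the assembled prompt would be passed to the LLM to produce the comment.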
Acknowledgement
This work is supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX23_1775) under Grant 181200003023202, project "Research and Implementation of Automated Essay Scoring".
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Cai, K., Zhou, J., Kong, L., Liang, D., Li, X. (2024). Automated Comment Generation Based on the Large Language Model. In: Hong, W., Kanaparan, G. (eds) Computer Science and Education. Computer Science and Technology. ICCSE 2023. Communications in Computer and Information Science, vol 2023. Springer, Singapore. https://doi.org/10.1007/978-981-97-0730-0_25
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-0729-4
Online ISBN: 978-981-97-0730-0
eBook Packages: Computer Science (R0)