
Automated Comment Generation Based on the Large Language Model

  • Conference paper
Computer Science and Education. Computer Science and Technology (ICCSE 2023)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 2023)


Abstract

Automated essay comment generation is important in writing education: it can reduce teachers' workload while giving students rapid feedback to improve their writing. When writing, students often take the chains of writing thought ideas in several excellent essays as references, which inspire them to write more creatively; teachers, likewise, refer to such essays when suggesting how to improve the quality of an essay. Inspired by this behaviour from writing psychology, we guide a Large Language Model (LLM) to imitate it. In this paper, we study the essay comment generation task in a practical setting, which aims to generate comments on how to amplify the writing thought ideas of a given English expository essay. To tackle this task, we propose a two-stage comment generation framework: we first retrieve cases of chains of writing thought ideas, and then use them as evidence to guide the LLM to learn from these references and generate more concrete comments. In addition, we manually collect English expository essays to build the knowledge base, along with essay-comment pairs (the source code is available at https://github.com/CarryCKW/EssayComGen). Extensive experiments show that our method significantly outperforms strong baselines.


Notes

  1. https://open.bigmodel.cn/.

  2. https://huggingface.co/meta-llama/Llama-2-13b-chat-hf.


Acknowledgement

This work is supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX23_1775) under Grant 181200003023202, project "Research and Implementation of Automated Essay Scoring".

Author information

Correspondence to Junsheng Zhou.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Cai, K., Zhou, J., Kong, L., Liang, D., Li, X. (2024). Automated Comment Generation Based on the Large Language Model. In: Hong, W., Kanaparan, G. (eds) Computer Science and Education. Computer Science and Technology. ICCSE 2023. Communications in Computer and Information Science, vol 2023. Springer, Singapore. https://doi.org/10.1007/978-981-97-0730-0_25

Download citation

  • DOI: https://doi.org/10.1007/978-981-97-0730-0_25


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0729-4

  • Online ISBN: 978-981-97-0730-0

  • eBook Packages: Computer Science, Computer Science (R0)
