Abstract
Essay writing can reveal a student's language proficiency. Using intelligent technology to grade essays automatically saves significant manpower and time while improving scoring accuracy. Current models typically rely on shallow semantic features, deep semantic features, or multi-level semantic features; however, existing models struggle to achieve both high scoring accuracy and strong generalization. We therefore propose a model that incorporates multi-level semantic features. Specifically, we manually define and automatically extract shallow semantic features; we use the BERT pre-trained model, convolutional neural networks, and recurrent neural networks to extract deep semantic features; finally, feature fusion is applied to score essays automatically. Experimental results on two datasets show that the proposed model outperforms three state-of-the-art baseline methods, and its generalization is greatly enhanced. The study contributes to both theoretical investigation and practical application of automated essay scoring.
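The pipeline in the abstract — hand-defined shallow features, an encoder-derived deep representation, and a fusion layer that produces the score — can be sketched as follows. This is a minimal illustration only: the shallow features chosen here (length, average word/sentence length, type-token ratio) and the hash-based stand-in for the BERT+CNN+RNN encoder are assumptions for demonstration, not the paper's actual feature set or architecture, and the linear head's weights would be learned by regression against human scores.

```python
import numpy as np

def shallow_features(essay: str) -> np.ndarray:
    # Illustrative hand-defined shallow semantic features:
    # word count, average word length, average sentence length,
    # and type-token ratio (a simple vocabulary-richness measure).
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / max(n_words, 1)
    avg_sent_len = n_words / max(len(sentences), 1)
    ttr = len(set(w.lower() for w in words)) / max(n_words, 1)
    return np.array([n_words, avg_word_len, avg_sent_len, ttr], dtype=float)

def deep_features(essay: str, dim: int = 8) -> np.ndarray:
    # Stand-in for the deep semantic encoder (BERT + CNN + RNN in the paper):
    # here just a deterministic hashed bag-of-words vector, L2-normalized.
    vec = np.zeros(dim)
    for w in essay.lower().split():
        vec[hash(w) % dim] += 1.0
    return vec / max(np.linalg.norm(vec), 1e-9)

def score(essay: str, w: np.ndarray, b: float) -> float:
    # Feature fusion: concatenate shallow and deep features,
    # then apply a linear scoring head (weights learned during training).
    fused = np.concatenate([shallow_features(essay), deep_features(essay)])
    return float(fused @ w + b)
```

In the paper's actual model, `deep_features` would be replaced by the BERT/CNN/RNN encoder, and the head would be trained end-to-end on graded essays; the sketch only shows how the two feature levels are fused before scoring.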
Acknowledgements
This paper was supported by the Graduate Education Reform Project of Beijing University of Posts and Telecommunications (2022Y004) and the High-performance Computing Platform of BUPT.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Li, J., Wu, J. (2023). Automated Essay Scoring Incorporating Multi-level Semantic Features. In: Wang, N., Rebolledo-Mendez, G., Dimitrova, V., Matsuda, N., Santos, O.C. (eds) Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky. AIED 2023. Communications in Computer and Information Science, vol 1831. Springer, Cham. https://doi.org/10.1007/978-3-031-36336-8_31
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-36335-1
Online ISBN: 978-3-031-36336-8
eBook Packages: Computer Science, Computer Science (R0)