Representing Scoring Rubrics as Graphs for Automatic Short Answer Grading

  • Conference paper in Artificial Intelligence in Education (AIED 2022)

Abstract

To score open-ended responses, researchers often design a scoring rubric. Rubrics can help produce more consistent ratings and reduce bias. This project explores whether an automated short answer grading model can learn information from a scoring rubric to produce ratings closer to those of a human. We explore the impact of adding an additional transformer encoder layer to a BERT model and training the weights of this extra layer with only the scoring rubric text. Additionally, we experiment with using Node2Vec sampling to capture the graph-like ordinal structure in the rubric text to further pre-train the model. Results show superior model performance when further pre-training with the scoring rubric text. Specifically, questions that elicit a very simple rubric structure show the most improvement from incorporating rubric text. Using Node2Vec to capture the structure of the text had an inconclusive impact.
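The architecture the abstract describes (a frozen pretrained BERT encoder with one extra, trainable transformer encoder layer whose weights are trained only on the rubric text) can be sketched roughly as follows. This is a minimal illustration assuming the Hugging Face transformers library; the class name RubricBERT and all training details are hypothetical, not the authors' implementation.

```python
# Minimal sketch, assuming Hugging Face transformers; RubricBERT is a
# hypothetical illustration, not the authors' code.
import torch.nn as nn
from transformers import BertModel
from transformers.models.bert.modeling_bert import BertLayer


class RubricBERT(nn.Module):
    """Frozen BERT encoder plus one extra, trainable encoder layer."""

    def __init__(self, num_labels: int):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():
            p.requires_grad = False              # keep pretrained weights fixed
        self.extra_layer = BertLayer(self.bert.config)  # trained on rubric text
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        # BertLayer expects an additive mask broadcastable to (batch, heads, seq, seq)
        ext_mask = (1.0 - attention_mask[:, None, None, :].float()) * -10000.0
        hidden = self.extra_layer(hidden, attention_mask=ext_mask)[0]
        return self.classifier(hidden[:, 0])     # predict the score from [CLS]
```

The Node2Vec sampling step could similarly be approximated with a toy graph whose nodes are the rubric's ordinal score levels, assuming the networkx and node2vec packages; node names and hyperparameters below are placeholders only.

```python
# Toy rubric graph and biased random walks; illustrative only.
import networkx as nx
from node2vec import Node2Vec

rubric_graph = nx.Graph()
rubric_graph.add_edges_from([("score_0", "score_1"),
                             ("score_1", "score_2"),
                             ("score_2", "score_3")])
walks = Node2Vec(rubric_graph, dimensions=32, walk_length=5,
                 num_walks=10, p=1.0, q=1.0).walks  # sampled walk sequences
```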

Author information

Correspondence to Aubrey Condor, Zachary Pardos, or Marcia Linn.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Condor, A., Pardos, Z., Linn, M. (2022). Representing Scoring Rubrics as Graphs for Automatic Short Answer Grading. In: Rodrigo, M.M., Matsuda, N., Cristea, A.I., Dimitrova, V. (eds) Artificial Intelligence in Education. AIED 2022. Lecture Notes in Computer Science, vol 13355. Springer, Cham. https://doi.org/10.1007/978-3-031-11644-5_29

  • DOI: https://doi.org/10.1007/978-3-031-11644-5_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-11643-8

  • Online ISBN: 978-3-031-11644-5
