Enhancing Instructors’ Capability to Assess Open-Response Using Natural Language Processing and Learning Analytics

  • Conference paper
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13450)

Abstract

Assessments are crucial for measuring student progress and providing constructive feedback. However, instructors face a heavy workload, which often leads to more superficial assessments that do not always include the questions and activities needed to evaluate students adequately. For instance, open-ended questions and textual productions are well known to stimulate critical thinking and knowledge construction, but this type of question demands considerable effort and time to evaluate. Previous work has focused on automatically scoring open-ended responses based on the similarity between students’ answers and a reference solution provided by the instructor. This approach has benefits but also several drawbacks, such as failing to provide quality feedback to students and potentially introducing negative bias into the assessment. To address these challenges, this paper presents a new approach that combines learning analytics and natural language processing methods to support instructors in assessing open-ended questions. The main novelty of this paper is the replacement of similarity analysis with a tag recommendation algorithm that automatically assigns already-known correct statements and errors to responses, along with an explanation for each tag.
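The tag-recommendation idea described above can be illustrated with a minimal sketch: recommend feedback tags for a new answer by finding its most similar previously tagged answers. Everything here is an illustrative assumption, not the paper’s actual algorithm: the bag-of-words cosine similarity, the `recommend_tags` helper, and the toy corpus are all hypothetical.

```python
# Hypothetical sketch of tag recommendation for open-ended answers:
# reuse tags from the most similar previously tagged answer
# (bag-of-words cosine similarity; not the paper's actual method).
from collections import Counter
import math


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def recommend_tags(answer, tagged_answers, k=1):
    """tagged_answers: list of (text, set_of_tags) pairs already
    labelled by the instructor; returns tags of the k most similar."""
    vec = Counter(answer.lower().split())
    scored = sorted(
        ((cosine(vec, Counter(text.lower().split())), tags)
         for text, tags in tagged_answers),
        key=lambda pair: pair[0], reverse=True)
    recommended = set()
    for sim, tags in scored[:k]:
        if sim > 0:                       # ignore completely dissimilar answers
            recommended |= tags
    return recommended


# Toy corpus: answers an instructor has already tagged with known
# correct statements and known errors.
corpus = [
    ("photosynthesis converts light energy into chemical energy",
     {"correct: energy conversion"}),
    ("plants breathe oxygen at night only",
     {"error: respiration misconception"}),
]

print(recommend_tags(
    "light energy is converted to chemical energy by plants", corpus))
```

A production system would replace the bag-of-words vectors with stronger representations (e.g. transformer embeddings) and attach the instructor’s stored explanation to each recommended tag, but the nearest-neighbour structure stays the same.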


Notes

  1. https://tutor-ia.com/.


Acknowledgment

This research was partially supported by the RNP, FACEPE (APQ-0749-1.03/21), and fellowship 310888/2021-2 of the National Council for Scientific and Technological Development - CNPq.

Author information

Correspondence to Rafael Ferreira Mello.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Mello, R.F. et al. (2022). Enhancing Instructors’ Capability to Assess Open-Response Using Natural Language Processing and Learning Analytics. In: Hilliger, I., Muñoz-Merino, P.J., De Laet, T., Ortega-Arranz, A., Farrell, T. (eds) Educating for a New Future: Making Sense of Technology-Enhanced Learning Adoption. EC-TEL 2022. Lecture Notes in Computer Science, vol 13450. Springer, Cham. https://doi.org/10.1007/978-3-031-16290-9_8

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-16290-9_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16289-3

  • Online ISBN: 978-3-031-16290-9

  • eBook Packages: Computer Science (R0)
