Abstract
Feedback is important for learning; however, manual feedback provisioning is time- and resource-consuming. For programming education, various systems have been developed to automate recurring tasks or the entire feedback generation. As fully automated systems are not always a viable option, this paper investigates how teaching assistants can be supported in the semi-automated assessment of programming assignments so that they can give good feedback more easily. An existing semi-automated e-assessment system is extended with configurable feedback snippets as well as adaptively recommended feedback snippets based on the feedback given to similar submissions, in order to evaluate the effects on the grading and the feedback given by teaching assistants. The results indicate that such feedback snippets lead to more consistent and motivational feedback, can help in finding mistakes, and have no impact on the awarded grades.
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Strickroth, S., Holzinger, F. (2023). Supporting the Semi-automatic Feedback Provisioning on Programming Assignments. In: Temperini, M., et al. (eds.) Methodologies and Intelligent Systems for Technology Enhanced Learning, 12th International Conference. MIS4TEL 2022. Lecture Notes in Networks and Systems, vol 580. Springer, Cham. https://doi.org/10.1007/978-3-031-20617-7_3
DOI: https://doi.org/10.1007/978-3-031-20617-7_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20616-0
Online ISBN: 978-3-031-20617-7