Exploring the Connections Between the Use of an Automated Feedback System and Learning Behavior in a MOOC for Programming

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13450))

Abstract

Automated Testing and Feedback (ATF) systems are widely used in programming courses, providing learners with immediate feedback and facilitating hands-on practice. In Massive Open Online Courses (MOOCs), where students often struggle and instructors’ assistance is scarce, ATF appears particularly essential. However, the impact of ATF on learning in programming MOOCs is understudied. This study explores the connections between ATF usage and learning behavior, addressing relevant measures of learning in MOOCs. We extracted data on learners’ engagement with the course material, their code submissions, and a self-reported questionnaire in a Python programming MOOC with an embedded ATF system, compiling an overall and unique picture of learning behavior. Learners’ response to feedback was determined by sequence analysis of code submissions, identifying re-submissions that either improved on the feedback or ignored it. Clusters of learners with common learning behaviors were identified, and their responses to feedback were compared. We believe that our findings, as well as the holistic approach we propose for investigating ATF impact, will contribute to research in this field and to the effective integration of ATF systems to maximize the learning experience in programming MOOCs.
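The sequence analysis described above — labeling each re-submission as improved or feedback-ignored — could be sketched roughly as follows. This is a minimal illustration, not the authors' actual method: the record fields (`learner`, `exercise`, `timestamp`, `code`, `tests_passed`) and the labeling rules (unchanged code means the feedback was ignored; more tests passed means improvement) are assumptions for the sake of the example.

```python
from collections import defaultdict

def classify_resubmissions(submissions):
    """Label each re-submission in a stream of ATF submission records.

    Hypothetical record fields: learner, exercise, timestamp, code,
    tests_passed. A re-submission is 'feedback-ignored' when its code is
    identical to the previous attempt, 'improved' when the code changed
    and more tests passed, and 'other' otherwise.
    """
    # Group attempts by (learner, exercise) to form per-task sequences.
    by_attempt = defaultdict(list)
    for s in submissions:
        by_attempt[(s["learner"], s["exercise"])].append(s)

    labels = []
    for attempts in by_attempt.values():
        attempts.sort(key=lambda s: s["timestamp"])
        # Compare each re-submission with its immediate predecessor.
        for prev, curr in zip(attempts, attempts[1:]):
            if curr["code"] == prev["code"]:
                labels.append("feedback-ignored")
            elif curr["tests_passed"] > prev["tests_passed"]:
                labels.append("improved")
            else:
                labels.append("other")
    return labels
```

The per-pair labels could then be aggregated per learner into feature vectors for the clustering step the abstract mentions.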



Acknowledgement

Our thanks to the Azrieli Foundation for the award of a generous Azrieli Fellowship, which enabled this research. We thank the anonymous reviewers for their constructive comments.

Author information

Corresponding author

Correspondence to Hagit Gabbay.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Gabbay, H., Cohen, A. (2022). Exploring the Connections Between the Use of an Automated Feedback System and Learning Behavior in a MOOC for Programming. In: Hilliger, I., Muñoz-Merino, P.J., De Laet, T., Ortega-Arranz, A., Farrell, T. (eds) Educating for a New Future: Making Sense of Technology-Enhanced Learning Adoption. EC-TEL 2022. Lecture Notes in Computer Science, vol 13450. Springer, Cham. https://doi.org/10.1007/978-3-031-16290-9_9

  • DOI: https://doi.org/10.1007/978-3-031-16290-9_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16289-3

  • Online ISBN: 978-3-031-16290-9

  • eBook Packages: Computer Science, Computer Science (R0)
