Abstract
According to our data, about 15% of programming students cheat if they know that only a "dumb" robot evaluates their programming assignments, unattended by programming experts. Especially in large-scale formats like MOOCs this becomes a problem, because tricking current automated programming assignment assessment systems (APAAS) is astonishingly easy, and the question arises whether unattended grading components grade the ability to program or the ability to trick. This study analyzed what kinds of tricks students apply beyond well-known "copy-paste" code plagiarism, in order to derive possible mitigation options. It analyzed the cheat patterns that occurred in two programming courses and developed a unit testing framework, JEdUnit, as a solution proposal that intentionally targets such tricky educational aspects of programming. JEdUnit was then validated in a further programming course. The study identified and analyzed four recurring cheat patterns (overfitting, evasion, redirection, and injection) that hardly occur in "normal" software development and are therefore not addressed by the standard unit testing frameworks frequently used to test the correctness of student submissions. The concept of well-known unit testing frameworks was accordingly extended with three countermeasures: randomization, code inspection, and separation. The validation showed that JEdUnit detected these patterns and, in consequence, reduced cheating entirely to zero. From the students' perspective, JEdUnit makes the grading component more intelligent, so cheating no longer pays off. This chapter explains the cheat patterns and, by means of a continuous example, which features of JEdUnit mitigate them.
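To illustrate the first of these cheat patterns, the following sketch shows what "overfitting" means for a graded assignment and why randomized test data defeats it. The assignment, the method names, and the test logic are illustrative assumptions for this example, not part of JEdUnit's actual API: a submission hardcodes the expected output of the one publicly known test case instead of solving the task, and a randomized check in the style of JEdUnit's randomization countermeasure exposes it.

```java
import java.util.Random;

public class OverfittingDemo {

    // Honest submission: sums the integers 1..n (Gauss formula).
    static int sumHonest(int n) {
        return n * (n + 1) / 2;
    }

    // Overfitting cheat: hardcodes the expected output of the single
    // publicly known test case (sum of 1..10 == 55) without computing anything.
    static int sumCheat(int n) {
        return 55;
    }

    public static void main(String[] args) {
        // Both submissions pass the fixed, publicly known test case.
        if (sumHonest(10) != 55 || sumCheat(10) != 55)
            throw new AssertionError("fixed test case failed");

        // Randomized test data (the randomization countermeasure) exposes
        // the hardcoded answer: for any n != 10 the cheat returns a wrong value.
        int n = 11 + new Random().nextInt(100);
        if (sumCheat(n) == n * (n + 1) / 2)
            throw new AssertionError("cheat survived randomization");
        System.out.println("cheat detected by randomization for n = " + n);
    }
}
```

Against a single fixed test case the grader cannot distinguish the two submissions; with randomized inputs the hardcoded answer fails for every value of n other than 10, which is why randomization makes this pattern no longer pay off.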
Notes
- 2. The reflection API would enable arbitrary calling indirections that could not be identified by code inspection.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Kratzke, N. (2020). How Programming Students Trick and What JEdUnit Can Do Against It. In: Lane, H.C., Zvacek, S., Uhomoibhi, J. (eds) Computer Supported Education. CSEDU 2019. Communications in Computer and Information Science, vol 1220. Springer, Cham. https://doi.org/10.1007/978-3-030-58459-7_1
Print ISBN: 978-3-030-58458-0
Online ISBN: 978-3-030-58459-7
eBook Packages: Computer Science, Computer Science (R0)