ABSTRACT
As the demand for software engineers rises, so does the demand for their education. With the growing number of students, educators struggle to keep up. We aim to ease their burden by providing a new tool for semiautomatic source code assessment, named DEMAx. It analyzes C/C++ source code together with its test case results and, with the help of machine learning, estimates the likelihood that a submission should be manually assessed.
In this paper we present the tool, focusing on the improvements over our previous work: direct static analysis of non-compiling code and ranking metrics for the source code. Finally, we present the results of the improved model on the test data, which provide solid ground for the use of our tool.
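The abstract describes a pipeline that combines test case results and static analysis signals with a learned model to score how likely a submission is to need manual assessment. The paper's actual features and classifier are not reproduced here; the following is a minimal sketch with illustrative, hand-set weights standing in for a trained model, and all feature names are hypothetical.

```python
import math

def manual_review_likelihood(tests_passed: int, tests_total: int,
                             warning_count: int, compiles: bool) -> float:
    """Return a score in [0, 1]: higher means the submission is more
    likely to need manual assessment (e.g. partial credit, code that
    fails to compile but contains salvageable logic).

    The features (test pass ratio, static-analysis warning count,
    compile status) and the weights below are illustrative only.
    """
    pass_ratio = tests_passed / tests_total if tests_total else 0.0
    # Linear combination with hand-set weights in place of a trained
    # classifier; a real system would fit these from labeled data.
    z = (2.0 * (1.0 - pass_ratio)        # failing tests raise the score
         + 0.3 * warning_count           # static-analysis warnings
         + (1.5 if not compiles else 0.0)  # non-compiling code
         - 1.0)                          # bias term
    return 1.0 / (1.0 + math.exp(-z))    # logistic link -> probability

# A clean, fully passing submission scores low; a failing,
# non-compiling one with many warnings scores high.
clean = manual_review_likelihood(10, 10, 0, True)
broken = manual_review_likelihood(2, 10, 5, False)
```

In this sketch a threshold on the returned score would decide which submissions are routed to a human grader, which matches the semiautomatic workflow the abstract outlines.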
REFERENCES
- Emil Stankov, Mile Jovanov, Aleksandar Bojchevski, and Ana Madevska Bogdanova. 2013. EMAx: Software for C++ source code analysis. Olympiads in Informatics 7 (2013), 123–131. https://doi.org/10.15388/ioi.2013
- Emil Stankov, Mile Jovanov, Ana Madevska Bogdanova, and Marjan Gusev. 2013. A new model for semiautomatic student source code assessment. Journal of Computing and Information Technology (CIT) 21, 3 (2013), 185–194. https://doi.org/10.2498/cit.1002193
- Tim Buckers, Clinton Cao, Michiel Doesburg, Boning Gong, Sunwei Wang, Moritz Beller, and Andy Zaidman. 2017. UAV: Warnings from multiple automated static analysis tools at a glance. In Proceedings of the 2017 IEEE 24th International Conference on Software Analysis, Evolution and Reengineering (SANER 2017). IEEE, Piscataway, NJ, 472–476. https://doi.org/10.1109/SANER.2017.7884656
- Moritz Beller, Radjino Bholanath, Shane McIntosh, and Andy Zaidman. 2016. Analyzing the state of static analysis: a large-scale evaluation in open source software. In Proceedings of the 2016 IEEE 23rd International Conference on Software Analysis, Evolution and Reengineering (SANER 2016). IEEE, Piscataway, NJ, 470–481. https://doi.org/10.1109/SANER.2016.105
- Carmine Vassallo, Sebastiano Panichella, Fabio Palomba, Sebastian Proksch, Harald C. Gall, and Andy Zaidman. 2020. How developers engage with static analysis tools in different contexts. Empirical Software Engineering 25 (March 2020), 1419–1457. https://doi.org/10.1007/s10664-019-09750-5
- Tomáš Foltýnek, Norman Meuschke, and Bela Gipp. 2020. Academic plagiarism detection: A systematic literature review. ACM Computing Surveys 52, 6 (January 2020), 1–42. https://doi.org/10.1145/3345317
- Shalini Kaleeswaran, Anirudh Santhiar, Aditya Kanade, and Sumit Gulwani. 2016. Semi-supervised verified feedback generation. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE’16). ACM, New York, NY, USA, 739–750. https://doi.org/10.1145/2950290.2950363
- Pasquale Ardimento, Mario L. Bernardi, and Marta Cimitile. 2020. Software analytics to support students in object-oriented programming tasks: An empirical study. IEEE Access 8 (July 2020), 132171–132187. https://doi.org/10.1109/ACCESS.2020.3010172
- Yuto Yoshizawa and Yutaka Watanobe. 2019. Logic error detection system based on structure pattern and error degree. Advances in Science, Technology and Engineering Systems 4, 5 (September 2019), 574–584. https://doi.org/10.25046/aj040501
- Stephen H. Edwards, Nischel Kandru, and Mukund B. M. Rajagopal. 2017. Investigating static analysis errors in student Java programs. In Proceedings of the 2017 ACM Conference on International Computing Education Research (ICER’17). ACM, New York, NY, USA, 65–73. https://doi.org/10.1145/3105726.3106182
- Hsi-Min Chen, Bao-An Nguyen, Yi-Xiang Yan, and Chyi-Ren Dow. 2020. Analysis of learning behavior in an automated programming assessment environment: A code quality perspective. IEEE Access 8 (September 2020), 167341–167354. https://doi.org/10.1109/ACCESS.2020.3024102
- Lei Gao, Bo Wan, Cheng Fang, Yangyang Li, and Chen Chen. 2019. Automatic clustering of different solutions to programming assignments in computing education. In Proceedings of the ACM Conference on Global Computing Education (CompEd’19). ACM, New York, NY, USA, 164–170. https://doi.org/10.1145/3300115.3309515
- David Insa, Sergio Pérez, Josep Silva, and Salvador Tamarit. 2020. Semiautomatic generation and assessment of Java exercises in engineering education. Computer Applications in Engineering Education 28 (October 2020), in press. https://doi.org/10.1002/cae.22356
- Pedro Delgado‐Pérez and Inmaculada Medina‐Bulo. 2020. Customizable and scalable automated assessment of C/C++ programming assignments. Computer Applications in Engineering Education 28, 6 (November 2020), 1449–1466. https://doi.org/10.1002/cae.22317
- Fatima Al Shamsi and Ashraf Elnagar. 2012. An intelligent assessment tool for students’ Java submissions in introductory programming courses. Journal of Intelligent Learning Systems and Applications 4, 1 (February 2012), 59–69. https://doi.org/10.4236/jilsa.2012.41006
- Ádám Pintér and Sándor Szénási. 2020. Automatic analysis and evaluation of student source codes. In IEEE 20th International Symposium on Computational Intelligence and Informatics. IEEE, Piscataway, NJ, 161–166. https://doi.org/10.1109/CINTI51262.2020.9305819