
DEMAx Tool Based on an Improved Model for Semiautomatic C/C++ Source Code Assessment

Published: 14 October 2021
DOI: 10.1145/3470716.3470728

ABSTRACT

As the demand for software engineers rises, so does the demand for their education, and with the growing number of students, educators struggle to keep up. We aim to ease their burden by providing a new tool for semiautomatic source code assessment, named DEMAx. It analyzes C/C++ source code together with the corresponding test case results and, with the help of machine learning, estimates the likelihood that a submission should be assessed manually.
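
To make the idea concrete, the following is a minimal, hypothetical sketch of such a likelihood estimate, not DEMAx's actual implementation: illustrative per-submission features (test-case pass rate, static-analysis warning count, simple size and complexity metrics) are fed to an off-the-shelf classifier, and the predicted probability of the "needs manual assessment" class is reported to the educator. All feature names, values, and the choice of classifier below are assumptions made for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical per-submission features:
    # [fraction of test cases passed, static-analysis warning count,
    #  cyclomatic complexity, lines of code]
    X_train = np.array([
        [1.00,  0,  3,  40],
        [0.25, 12,  9, 120],
        [0.75,  4,  5,  80],
        [0.00, 20, 11, 150],
    ])
    # Label: 1 if a human grader had to intervene, 0 otherwise.
    y_train = np.array([0, 1, 0, 1])

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # For a new submission, report the likelihood that it needs manual review.
    new_submission = np.array([[0.50, 7, 6, 95]])
    needs_review_probability = model.predict_proba(new_submission)[0, 1]
    print(f"Likelihood of manual assessment: {needs_review_probability:.2f}")

Under this reading, an educator would sort submissions by the reported probability and inspect only the ones the model is least confident about, which is what makes the assessment semiautomatic rather than fully automatic.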

In this paper we present the tool, focusing on the improvements over our previous work, which include direct static analysis of non-compiling code and ranking metrics for the source codes. Finally, we present the results of the improved model on the test data, which provide solid ground for the use of our tool.
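
The paper does not name the analyzer DEMAx uses on non-compiling code. As an illustration only, the sketch below extracts warning counts directly from a submission with cppcheck, which inspects the source text and therefore still produces diagnostics when the code would not compile. The choice of cppcheck and the feature names are our assumptions, not the authors' method.

    import subprocess

    def static_analysis_features(source_path: str) -> dict:
        """Summarize static-analysis findings for one submission.

        cppcheck works on the source text directly, so it still reports
        diagnostics for code that a compiler would reject.
        """
        result = subprocess.run(
            ["cppcheck", "--enable=warning,style,performance",
             "--template={file}:{line}:{severity}:{message}", source_path],
            capture_output=True, text=True,
        )
        # One diagnostic per stderr line, e.g. "sub.cpp:12:style:Variable 'x' ..."
        severities = [line.split(":", 3)[2]
                      for line in result.stderr.splitlines()
                      if line.count(":") >= 3]
        # Hypothetical features that an assessment/ranking model could consume.
        return {
            "total_diagnostics": len(severities),
            "errors": severities.count("error"),
            "warnings": severities.count("warning"),
            "style_issues": severities.count("style"),
        }

    if __name__ == "__main__":
        print(static_analysis_features("submission.cpp"))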


Published in

ICIEI '21: Proceedings of the 6th International Conference on Information and Education Innovations
April 2021, 145 pages
ISBN: 9781450389488
DOI: 10.1145/3470716
Copyright © 2021 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Qualifiers

• Research article
• Research
• Refereed limited
