DOI: 10.1145/3548660.3561334
Short paper

Towards automated testing for simple programming exercises

Published: 9 November 2022

ABSTRACT

Encoding new programming exercises for first-year students in automated feedback and grading platforms can require substantial effort. Such exercises are usually simple, but ensuring their functional correctness still requires defining several test cases. This paper describes our initial effort to leverage automated test case generation for simple programming exercises. We rely on grey-box fuzzing and random combinations of method calls to test students' solutions, comparing their execution results against those produced by a reference implementation. We implemented our approach in a prototype called SimPyTest, openly available on GitHub, and discuss its usage and possible future extensions.
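
To make the idea concrete, here is a minimal Python sketch of the differential-testing loop the abstract describes, not SimPyTest's actual code: a student's solution and the instructor's reference implementation are run on randomly generated inputs, and any disagreement is reported as a failure. The clamp exercise, every name, and the plain random-input generation are illustrative assumptions; the grey-box fuzzing and random method-call combinations used by the tool are more involved.

    import random

    def reference_clamp(value, low, high):
        # Instructor-provided reference implementation (hypothetical exercise).
        return max(low, min(high, value))

    def student_clamp(value, low, high):
        # Hypothetical student submission under test.
        if value < low:
            return low
        if value > high:
            return high
        return value

    def differential_test(student_fn, reference_fn, trials=1000, seed=0):
        # Run both implementations on random inputs and collect disagreements.
        rng = random.Random(seed)
        failures = []
        for _ in range(trials):
            low = rng.randint(-100, 100)
            high = rng.randint(low, low + 200)  # keep low <= high
            value = rng.randint(-300, 300)
            expected = reference_fn(value, low, high)
            actual = student_fn(value, low, high)
            if actual != expected:
                failures.append(((value, low, high), expected, actual))
        return failures

    if __name__ == "__main__":
        mismatches = differential_test(student_clamp, reference_clamp)
        print(f"{len(mismatches)} mismatching input(s) found")

Because the oracle here is the reference implementation itself, the instructor does not have to hand-write individual test cases: every generated input doubles as a test, which is what makes encoding new exercises cheaper.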


Published in

EASEAI 2022: Proceedings of the 4th International Workshop on Education through Advanced Software Engineering and Artificial Intelligence
November 2022
44 pages
ISBN: 9781450394536
DOI: 10.1145/3548660

Copyright © 2022 ACM

Publisher

Association for Computing Machinery, New York, NY, United States
