DOI: 10.1145/3472673.3473961
short-paper

A tool for evaluating computer programs from students

Published: 23 August 2021

ABSTRACT

Computer science programs are attracting more and more students, and teachers must adapt to growing class sizes. Whereas small groups allowed frequent interaction between teachers and students, the resulting overcrowding removes that closeness and forces teachers to spend less time with each student. Students can therefore quickly feel overwhelmed and helpless in the face of the course's difficulty. This paper proposes a solution that aims to reduce drop-out in programming courses. It provides accurate feedback on the quality of students' Python code to deepen their understanding, together with a playful interface to boost their interest in programming. The solution, developed under the name "METAssistant", has two objectives: it lets students evaluate their programs and receive accurate feedback, and it gives teachers an overview of how well their students understand the material.
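To make the kind of feedback loop described above concrete, the minimal sketch below shows one way a tool could run simple static checks on a student's Python submission and turn the results into readable messages. This is only an illustration under assumptions: the function name evaluate_submission and the specific heuristics (syntax check, missing docstrings, argument count, long lines) are hypothetical and are not drawn from the METAssistant implementation.

# Minimal sketch of a static feedback pass over a student's Python submission.
# NOT the METAssistant implementation; names and checks are illustrative assumptions.
import ast

def evaluate_submission(source: str) -> list[str]:
    """Return human-readable feedback messages for a student's Python source."""
    feedback = []

    # 1. Syntax check: a submission that does not parse gets immediate feedback.
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        return [f"Syntax error on line {err.lineno}: {err.msg}"]

    # 2. Simple quality heuristics on the parsed syntax tree.
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if ast.get_docstring(node) is None:
                feedback.append(f"Function '{node.name}' has no docstring.")
            if len(node.args.args) > 5:
                feedback.append(f"Function '{node.name}' takes many arguments; "
                                "consider grouping related values.")

    # 3. Layout heuristic: overly long lines hurt readability.
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > 99:
            feedback.append(f"Line {lineno} is longer than 99 characters.")

    return feedback or ["No issues detected by the basic checks."]

if __name__ == "__main__":
    student_code = "def add(a, b):\n    return a+b\n"
    for message in evaluate_submission(student_code):
        print(message)

In practice, a tool such as the one described in the abstract would presumably combine checks of this kind with richer quality metrics and surface the results through its playful interface.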


      • Published in

        EASEAI 2021: Proceedings of the 3rd International Workshop on Education through Advanced Software Engineering and Artificial Intelligence
        August 2021
        61 pages
        ISBN: 9781450386241
        DOI: 10.1145/3472673

        Copyright © 2021 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 23 August 2021


        Qualifiers

        • short-paper
