DOI: 10.1145/3545945.3569726 · SIGCSE Conference Proceedings · research-article

Studying the Impact of Auto-Graders Giving Immediate Feedback in Programming Assignments

Published: 3 March 2023

ABSTRACT

Immediate feedback from auto-graders positively impacts students' grades and self-efficacy in introductory programming courses. However, recent research has observed that students may fail to develop testing skills because they over-rely on the auto-grader's feedback. We therefore designed and conducted an empirical study of the impact of immediate feedback on students' ability to write correct programs and to test them. The results indicate that although students use the auto-grader's immediate feedback, it does not dissuade them from developing independent testing skills. Moreover, the feedback helps students, especially those from underrepresented groups (e.g., women), learn more effectively and gain confidence.


Published in

SIGCSE 2023: Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1
March 2023, 1481 pages
ISBN: 9781450394314
DOI: 10.1145/3545945

Copyright © 2023 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States



Acceptance Rates

Overall Acceptance Rate: 1,595 of 4,542 submissions (35%)

