Research Article · Open Access
DOI: 10.1145/3631802.3631804

Feedback on Student Programming Assignments: Teaching Assistants vs Automated Assessment Tool

Published: 06 February 2024

ABSTRACT

Existing research does not quantify or compare the differences between automated and manual assessment in the context of feedback on programming assignments. This makes it hard to reason about the effects of adopting automated assessment at the expense of manual assessment. Based on a controlled experiment involving N=117 first-semester undergraduate CS1 students, we compare the effects of having access to feedback from: (i) automated assessment only; (ii) manual assessment only (in the form of teaching assistants); and (iii) both automated and manual assessment. The three conditions are compared in terms of (objective) task effectiveness and from a (subjective) student perspective.

The experiment demonstrates that having access to both forms of assessment (automated and manual) is superior from both a task-effectiveness and a student perspective. We also find that the two forms of assessment are complementary: automated assessment appears to be better in terms of task effectiveness, whereas manual assessment appears to be better from a student perspective. Further, we find that automated assessment appears to work better for men than for women, who are significantly more inclined towards manual assessment. We then perform a cost/benefit analysis that identifies four equilibria that appropriately balance costs and benefits. This, in turn, gives rise to four recommendations on when to use which kind or combination of feedback (manual and/or automated), depending on the number of students and the per-student resources available. These observations provide educators with evidence-based justification for budget requests and considerations on when to (not) use automated assessment.
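The cost/benefit trade-off behind such recommendations rests on simple scaling arithmetic: the cost of TA feedback grows roughly linearly with enrollment, whereas an automated tool is dominated by a fixed setup cost plus a small per-student marginal cost. The Python sketch below illustrates that break-even reasoning; it is not the paper's model, and all figures (hours per student, hourly wage, setup cost) are hypothetical placeholders chosen for illustration only.

```python
# Hypothetical break-even sketch (not from the paper): compare the total cost
# of TA-based feedback, which scales with the number of students, against an
# automated assessment tool with a fixed setup cost plus a small per-student
# marginal cost. All parameter values below are illustrative assumptions.

def ta_cost(n_students: int, hours_per_student: float = 1.5,
            hourly_wage: float = 20.0) -> float:
    """Total cost of manual (TA) feedback for one assignment round."""
    return n_students * hours_per_student * hourly_wage

def tool_cost(n_students: int, setup_cost: float = 2000.0,
              marginal_cost: float = 1.0) -> float:
    """Total cost of automated feedback: fixed setup plus per-student cost."""
    return setup_cost + n_students * marginal_cost

if __name__ == "__main__":
    for n in (20, 50, 100, 200, 500):
        manual, automated = ta_cost(n), tool_cost(n)
        cheaper = "automated" if automated < manual else "manual"
        print(f"{n:>4} students: manual ${manual:>8.2f}  "
              f"automated ${automated:>8.2f}  -> {cheaper} cheaper")
```

Under these assumed parameters the break-even point falls at roughly 69 students; below it, manual feedback is cheaper, and above it, the tool's fixed setup cost amortizes away. The paper's actual equilibria additionally weigh the (differing) benefits of each feedback form, not just cost.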


Published in

Koli Calling '23: Proceedings of the 23rd Koli Calling International Conference on Computing Education Research
November 2023 · 361 pages
ISBN: 9798400716539
DOI: 10.1145/3631802
Copyright © 2023 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 6 February 2024



Acceptance Rates

Overall acceptance rate: 80 of 182 submissions, 44%