DOI: 10.1145/3502718.3524794
Research Article
Public Access

An Empirical Analysis of Code-Tracing Concepts

Published: 07 July 2022

ABSTRACT

Which code-tracing concepts are introductory programming students likely to learn from classroom instruction, and which ones need additional problem-solving practice to master? Are there relationships among programming concepts that can be used to build adaptive assessment instruments? To answer these questions, we analyzed data collected over several semesters by a suite of code-tracing tutors called problets, which administered a pretest, practice, post-test protocol. Each tutor covered a single programming topic consisting of 9 to 25 concepts. For each concept, we used the pretest data to calculate the probability that students knew the concept before using the tutor. Using a weighted average of the concept probabilities, we found that students had learned some topics more than others: if/if-else (0.85), function behavior (0.76), arrays (0.73), while (0.7), for (0.69), switch (0.67), and debugging functions (0.55). Concepts on which students needed additional practice included bugs, nested loops, and back-to-back loops. Expressions, even when used in novel contexts, were not challenging for students. We built a Bayesian network for each topic, based on conditional probabilities, to discover the concepts that must be covered and those whose coverage is redundant in the presence of other concepts. A strength of this empirical study is that it uses a large dataset collected from multiple institutions over multiple semesters. We also list threats to the validity of the study.
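To make the method concrete, here is a minimal sketch of the two computations the abstract describes: per-concept mastery probabilities from pretest correctness, a weighted topic-level average, and the kind of pairwise conditional probability a per-topic Bayesian network is built from. The records, concept names, and count-based weights are illustrative assumptions, not the authors' actual dataset, weighting scheme, or code.

```python
from collections import defaultdict

# Hypothetical pretest records: (student, concept, answered correctly).
# The concept names and responses below are illustrative stand-ins for
# the paper's multi-semester, multi-institution problet data.
pretest = [
    ("s1", "while-condition", True),  ("s1", "nested-loops", False),
    ("s2", "while-condition", True),  ("s2", "nested-loops", True),
    ("s3", "while-condition", False), ("s3", "nested-loops", False),
    ("s4", "while-condition", True),  ("s4", "nested-loops", True),
]

# Per-concept probability that students knew the concept before tutoring,
# estimated as the fraction of pretest responses that were correct.
correct, total = defaultdict(int), defaultdict(int)
for _, concept, ok in pretest:
    total[concept] += 1
    correct[concept] += ok
p_known = {c: correct[c] / total[c] for c in total}

# Topic-level score: weighted average of the concept probabilities.
# Weights here are response counts, an assumption; the abstract does not
# state the paper's actual weighting scheme.
n = sum(total.values())
topic_score = sum(p * total[c] for c, p in p_known.items()) / n

# Conditional probability P(knows A | knows B), the kind of quantity a
# per-topic Bayesian network encodes: a value near 1 suggests that
# covering B makes separate coverage of A redundant.
def p_given(a, b, records):
    by_student = defaultdict(dict)
    for s, c, ok in records:
        by_student[s][c] = ok
    knows_b = [r for r in by_student.values() if r.get(b)]
    return sum(r.get(a, False) for r in knows_b) / len(knows_b) if knows_b else 0.0

print(p_known)                # {'while-condition': 0.75, 'nested-loops': 0.5}
print(round(topic_score, 2))  # 0.62
print(p_given("while-condition", "nested-loops", pretest))  # 1.0
```

Under this reading, the reported score of 0.7 for the while topic would mean that, averaged across its concepts, students had roughly a 70% chance of answering a pretest question correctly before practice; the paper builds a full Bayesian network per topic rather than the single pairwise check sketched here.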


Published in

ITiCSE '22: Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education, Vol. 1
July 2022
686 pages
ISBN: 9781450392013
DOI: 10.1145/3502718

Copyright © 2022 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



        Acceptance Rates

Overall acceptance rate: 552 of 1,613 submissions, 34%
