ABSTRACT
Which code-tracing concepts are introductory programming students likely to learn from classroom instruction, and which ones need additional problem-solving practice to master? Are there relationships among programming concepts that can be used to build adaptive assessment instruments? To answer these questions, we analyzed data collected over several semesters by a suite of code-tracing tutors called problets, each of which administered a pre-test, practice, post-test protocol. Each tutor covered a single programming topic comprising 9–25 concepts. For each concept, we used the pre-test data to calculate the probability that students knew the concept before using the tutor. Using a weighted average of the concept probabilities, we found that students had learned some topics more than others: if/if-else (0.85), function behavior (0.76), arrays (0.73), while (0.7), for (0.69), switch (0.67), and debugging functions (0.55). The concepts on which students needed additional practice included bugs, nested loops, and back-to-back loops. Expressions, even when used in novel contexts, were not challenging for students. We built a Bayesian network for each topic based on conditional probabilities to discover the concepts that must be covered, and those whose coverage is redundant in the presence of other concepts. A strength of this empirical study is that it uses a large dataset collected from multiple institutions over multiple semesters. We also list threats to the validity of the study.
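The analysis described above can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the authors' actual pipeline: the student records, concept names, and weighting scheme (weighting each concept by its number of pre-test items) are invented for illustration. The conditional probability P(B known | A known) approximates the kind of dependency that would inform an edge in the per-topic Bayesian network; a value near 1 suggests covering B is redundant once A is covered.

```python
from collections import defaultdict

# Hypothetical pre-test records: (student, concept, answered correctly?)
records = [
    ("s1", "if", True),  ("s1", "nested-loop", False),
    ("s2", "if", True),  ("s2", "nested-loop", True),
    ("s3", "if", False), ("s3", "nested-loop", False),
    ("s4", "if", True),  ("s4", "nested-loop", False),
]

# P(concept known) = fraction of pre-test items on that concept answered correctly.
counts = defaultdict(lambda: [0, 0])  # concept -> [correct, total]
for _, concept, ok in records:
    counts[concept][1] += 1
    if ok:
        counts[concept][0] += 1
p_known = {c: correct / total for c, (correct, total) in counts.items()}

# Topic score = average of concept probabilities, weighted by item counts.
total_items = sum(t for _, t in counts.values())
topic_score = sum(p_known[c] * counts[c][1] for c in counts) / total_items

def cond_prob(a, b):
    """P(b known | a known), estimated over students who got a's item right."""
    by_student = defaultdict(dict)
    for s, c, ok in records:
        by_student[s][c] = ok
    a_known = [st for st in by_student if by_student[st].get(a)]
    if not a_known:
        return None
    return sum(by_student[st].get(b, False) for st in a_known) / len(a_known)

print(p_known["if"])                    # 0.75
print(topic_score)                      # 0.5
print(cond_prob("if", "nested-loop"))   # ~0.333
```

On this toy data, the low P(nested-loop | if) indicates nested loops are not learned for free alongside conditionals, mirroring the paper's finding that nested loops need additional practice.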