
Taking Advantage of Scale by Analyzing Frequent Constructed-Response, Code Tracing Wrong Answers

Published: 14 August 2017

Abstract

Constructed-response, code-tracing questions ("What would Python print?") are good formative assessments. Unlike selected-response questions simply marked correct or incorrect, a constructed wrong answer can provide information on a student's particular difficulty. However, constructed-response questions are resource-intensive to grade manually, and machine grading yields only correct/incorrect information. We analyzed incorrect constructed responses from code-tracing questions in an introductory computer science course to investigate whether a small subsample of such responses could provide enough information to make inspecting the subsample worth the effort, and if so, how best to choose this subsample. In addition, we sought to understand what insights into student difficulties could be gained from such an analysis.
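
For illustration, a hypothetical item of this kind (our own sketch, not drawn from the paper's question pool) shows how a constructed wrong answer can expose a specific difficulty:

    # "What would Python print?"
    a = [1, 2, 3]
    b = a            # assignment binds a second name; no copy is made
    b.append(4)
    print(a)
    # Correct constructed response: [1, 2, 3, 4]
    # A student who believes assignment copies the list would construct
    # [1, 2, 3], a wrong answer that pinpoints an aliasing difficulty
    # where a bare "incorrect" mark would not.
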
We found that the most frequently given ~5% of distinct wrong answers cover ~60% of all wrong constructed responses. Inspecting these wrong answers, we found misconceptions similar to those reported in prior work, additional difficulties not previously identified concerning language-specific constructs and data structures, and non-misconception "slips" that cause students to answer incorrectly, such as syntax errors and careless reading or writing of code.
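
As a minimal sketch of the frequency analysis behind this finding (our reconstruction, assuming the wrong responses are collected as a list of strings; the function and variable names are illustrative, not from the paper):

    from collections import Counter

    def coverage_of_top_fraction(wrong_answers, fraction=0.05):
        """Share of all wrong responses covered by the most frequent
        `fraction` of distinct wrong answers."""
        ranked = Counter(wrong_answers).most_common()  # (answer, count), descending
        k = max(1, int(len(ranked) * fraction))        # top ~5% of distinct answers
        covered = sum(count for _, count in ranked[:k])
        return covered / len(wrong_answers)

    # On the paper's data, this ratio came out near 0.60 for fraction=0.05.
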
Our methodology is much less time-consuming than full manual inspection, yet yields new and durable insight into student difficulties that can be used for several purposes, including expanding a concept inventory, creating summative assessments, and creating effective distractors for selected-response assessments.




Published In

ICER '17: Proceedings of the 2017 ACM Conference on International Computing Education Research
August 2017
316 pages
ISBN:9781450349680
DOI:10.1145/3105726

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. code-tracing questions
  2. constructed-response questions
  3. education
  4. formative assessments
  5. introductory computer science
  6. massive courses
  7. student errors

Qualifiers

  • Research-article

Conference

ICER '17
ICER '17: International Computing Education Research Conference
August 18 - 20, 2017
Tacoma, Washington, USA

Acceptance Rates

ICER '17 Paper Acceptance Rate: 29 of 180 submissions, 16%
Overall Acceptance Rate: 189 of 803 submissions, 24%



Cited By

  • (2023) Taking Stock of Concept Inventories in Computing Education: A Systematic Literature Review. Proceedings of the 2023 ACM Conference on International Computing Education Research - Volume 1, 397-415. DOI: 10.1145/3568813.3600120. Online publication date: 7-Aug-2023.
  • (2023) Examples of Unsuccessful Use of Code Comprehension Strategies: A Resource for Developing Code Comprehension Pedagogy. Proceedings of the 2023 ACM Conference on International Computing Education Research - Volume 1, 15-28. DOI: 10.1145/3568813.3600116. Online publication date: 7-Aug-2023.
  • (2022) Supporting skill integration in an intelligent tutoring system for code tracing. Journal of Computer Assisted Learning 39(2), 477-500. DOI: 10.1111/jcal.12757. Online publication date: 28-Nov-2022.
  • (2021) Tool-Aided Loop Invariant Development: Insights into Student Conceptions and Difficulties. Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education V. 1, 387-393. DOI: 10.1145/3430665.3456351. Online publication date: 26-Jun-2021.
  • (2020) Beyond binary correctness: Classification of students' answers in learning systems. User Modeling and User-Adapted Interaction 30(5), 867-893. DOI: 10.1007/s11257-020-09265-5. Online publication date: 1-Nov-2020.
  • (2019) Measuring Instruction Comprehension by Mining Memory Traces for Early Formative Feedback in Java Courses. Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education, 105-111. DOI: 10.1145/3304221.3325529. Online publication date: 2-Jul-2019.
  • (2019) From Clusters to Content. Proceedings of the 50th ACM Technical Symposium on Computer Science Education, 780-786. DOI: 10.1145/3287324.3287459. Online publication date: 22-Feb-2019.
  • (2019) Student Knowledge and Misconceptions. The Cambridge Handbook of Computing Education Research, 773-800. DOI: 10.1017/9781108654555.028. Online publication date: 15-Feb-2019.
  • (2019) The Cambridge Handbook of Computing Education Research. DOI: 10.1017/9781108654555. Online publication date: 15-Feb-2019.
  • (2018) Giving hints is complicated: understanding the challenges of an automated hint system based on frequent wrong answers. Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education, 45-50. DOI: 10.1145/3197091.3197102. Online publication date: 2-Jul-2018.
