DOI: 10.1145/3450613.3456833
Research Article | Public Access

Progression Trajectory-Based Student Modeling for Novice Block-Based Programming

Published: 21 June 2021

ABSTRACT

Block-based programming environments are widely used in computer science education. However, these environments pose significant challenges for student modeling. Given a series of problem-solving actions taken by students in block-based programming environments, student models must infer students’ programming abilities in real time to enable adaptive feedback and hints tailored to those abilities. While student models for block-based programming offer the potential to support student-adaptivity, creating them is challenging because students can develop a broad range of solutions to a given programming activity. To address these challenges, we introduce a progression trajectory-based student modeling framework for modeling novice students’ block-based programming across multiple learning activities. Progression trajectories use a time series representation that employs code analysis to incrementally compare student programs to expert solutions as students work through block-based programming activities. This paper reports on a study in which progression trajectories were collected from more than 100 undergraduate students completing a series of block-based programming activities in an introductory computer science course. Using progression trajectory-based student modeling, we identified three distinct trajectory classes: Early Quitting, High Persistence, and Efficient Completion. Analysis revealed that these trajectory classes exhibit significantly different characteristics with respect to students’ actions and predict students’ programming behaviors on future programming activities more accurately than competing baseline models. The findings suggest that progression trajectory-based student models can accurately model students’ block-based programming problem solving and hold potential for informing adaptive support in block-based programming environments.
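To make the trajectory representation concrete, the following is a minimal Python sketch of the idea described in the abstract: each student’s sequence of program snapshots is converted into a time series of similarity-to-expert scores, and the resulting trajectories are clustered into behavioral classes. The helper names (`snapshot_similarity`, `progression_trajectory`, `resample`), the use of `difflib` as a stand-in for the paper’s code analysis, and the choice of k-means over resampled, fixed-length trajectories are illustrative assumptions, not the authors’ implementation.

```python
# Minimal, illustrative sketch only -- the paper's actual code analysis,
# distance measure, and clustering setup are not reproduced here.
from difflib import SequenceMatcher

import numpy as np
from sklearn.cluster import KMeans


def snapshot_similarity(student_code: str, expert_code: str) -> float:
    """Crude stand-in (assumption) for the paper's code analysis: a 0-1
    similarity between a student program snapshot and an expert solution."""
    return SequenceMatcher(None, student_code, expert_code).ratio()


def progression_trajectory(snapshots, expert_code):
    """Map a student's sequence of program snapshots to a time series of
    similarity-to-expert scores (one point per snapshot)."""
    return [snapshot_similarity(s, expert_code) for s in snapshots]


def resample(trajectory, length=20):
    """Linearly resample a variable-length trajectory to a fixed length so
    trajectories from different students can be clustered together."""
    xs = np.linspace(0.0, 1.0, num=len(trajectory))
    return np.interp(np.linspace(0.0, 1.0, num=length), xs, trajectory)


# Hypothetical snapshot data for one programming activity.
expert = "move(10); turnLeft(); move(10); pickUp();"
students = {
    "s1": ["move(10);", "move(10); turnLeft();"],          # quits early
    "s2": ["move(5);", "move(10); turnLeft();",
           "move(10); turnLeft(); move(10);",
           "move(10); turnLeft(); move(10); pickUp();"],   # persists
    "s3": ["move(10); turnLeft(); move(10);",
           "move(10); turnLeft(); move(10); pickUp();"],   # efficient
}

X = np.array([resample(progression_trajectory(snaps, expert))
              for snaps in students.values()])

# Cluster the fixed-length trajectories; with real data the number of
# clusters would be chosen empirically, and a time-series-aware distance
# (e.g., dynamic time warping) could replace plain Euclidean distance.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(students.keys(), labels)))
```

Resampling to a fixed length is only one way to compare trajectories of differing lengths; dynamic time warping is a common alternative for time series of unequal length.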


Supplemental Material

UMAP21-lp13131.mp4 (mp4, 222.4 MB)

Published in

    UMAP '21: Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization
    June 2021
    325 pages
    ISBN: 9781450383660
    DOI: 10.1145/3450613

    Copyright © 2021 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Qualifiers

    • Research article
    • Refereed limited

    Acceptance Rates

    Overall Acceptance Rate: 162 of 633 submissions, 26%

