research-article

Predictive Student Modeling in Block-Based Programming Environments with Bayesian Hierarchical Models

Published: 13 July 2020

ABSTRACT

Recent years have seen a growing interest in block-based programming environments for computer science education. Although block-based programming offers a gentle introduction to coding for novice programmers, introductory computer science still presents significant challenges, so there is a great need for block-based programming environments to provide students with adaptive support. Predictive student modeling holds significant potential for adaptive support in block-based programming environments because it can identify early on when a student is struggling. However, predictive student models often make simplifying assumptions, such as a normal response distribution or homogeneous student characteristics; when these assumptions do not hold, they can significantly reduce a model's predictive accuracy.

To address these issues, we introduce an approach to predictive student modeling that utilizes Bayesian hierarchical linear models. This approach explicitly accounts for individual student differences and programming activity differences by analyzing block-based programs created by students in a series of introductory programming activities. Evaluation results reveal that predictive student models that account for both the distributional and hierarchical factors outperform baseline models. These findings suggest that predictive student models based on Bayesian hierarchical modeling, which represent individual differences among students, can substantially improve the accuracy of post-test performance prediction. By improving the predictive performance of student models, this work holds substantial potential for improving adaptive support in block-based programming environments.
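The core mechanism behind hierarchical models of this kind is partial pooling: per-student estimates are shrunk toward a population mean, with the amount of shrinkage governed by the relative between-student and within-student variances. The paper's full model is not reproduced here; the sketch below is only a minimal empirical-Bayes illustration of that idea on synthetic data, where all variable names, sizes, and variance values are assumptions for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative only): 20 students, 8 programming activities each.
n_students, n_activities = 20, 8
sigma_student, sigma_obs = 1.0, 2.0  # between-student and within-student SDs
true_ability = rng.normal(0.0, sigma_student, n_students)
scores = true_ability[:, None] + rng.normal(0.0, sigma_obs, (n_students, n_activities))

# Partial pooling (empirical-Bayes form): each student's estimate is a
# precision-weighted blend of their own mean and the grand mean.
grand_mean = scores.mean()
student_means = scores.mean(axis=1)
obs_var = sigma_obs**2 / n_activities  # variance of a per-student mean
shrinkage = obs_var / (sigma_student**2 + obs_var)
pooled = shrinkage * grand_mean + (1.0 - shrinkage) * student_means

print(round(shrinkage, 3))  # 0.333: each student mean is pulled 1/3 toward the grand mean
```

A full Bayesian hierarchical linear model, as the abstract describes, would additionally place priors on the variance components, add activity-level effects, and relax the normal-response assumption; those details are in the paper itself.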


Published in

UMAP '20: Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization
July 2020, 426 pages
ISBN: 9781450368612
DOI: 10.1145/3340631
Copyright © 2020 ACM

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 162 of 633 submissions, 26%
