A Framework for Using Hypothesis-Driven Approaches to Support Data-Driven Learning Analytics in Measuring Computational Thinking in Block-Based Programming Environments

Published: 28 August 2017

Abstract

Systematic efforts to take computer science (CS) and computational thinking (CT) to scale in middle and high school classrooms are underway, with curricula that emphasize the enactment of authentic CT skills, especially in the context of programming in block-based programming environments. There is, therefore, a growing need to measure students’ learning of CT in the context of programming and to support all learners through the process of learning computational problem solving. This research explores how hypothesis-driven approaches can be combined with data-driven ones to better interpret student actions and processes in log data captured from block-based programming environments, in order to measure and assess students’ CT skills. Informed by past literature and by our empirical work with a dataset from the use of the Fairy Assessment in the Alice programming environment in middle schools, we present a framework that formalizes a process in which a hypothesis-driven approach informed by Evidence-Centered Design complements data-driven learning analytics in interpreting students’ programming processes and assessing CT in block-based programming environments. We apply the framework to the design of Alice tasks for high school CS to be used for measuring CT during programming.



Published in ACM Transactions on Computing Education, Volume 17, Issue 3: Special Issue on Learning Analytics. September 2017, 116 pages. EISSN: 1946-6226. DOI: 10.1145/3135995.

              Copyright © 2017 ACM

              Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher: Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 October 2016
• Revised: 1 May 2017
• Accepted: 1 June 2017
• Published: 28 August 2017, in TOCE Volume 17, Issue 3


Qualifiers: research-article, refereed
