Unifying Computer-Based Assessment Across Conceptual Instruction, Problem-Solving, and Digital Games

  • Original research
Technology, Knowledge and Learning

Abstract

As students work through online learning systems such as the Reasoning Mind blended learning system, they are often not confined to a single educational activity; instead, they move among activities such as conceptual instruction, problem-solving items, and fluency-building games. However, most work on assessing student knowledge with methods such as Bayesian Knowledge Tracing has modeled learning in only one context or activity, even when the same skill is encountered in several of them. We investigate how student learning can be modeled across activities, towards understanding the relationships between activities and which approaches integrate information across them most successfully. We find, however, that integrating data across activities does not improve predictive power relative to using data from a single activity. This suggests that seemingly identical skills in different activities may in fact be cognitively distinct for students.
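For reference, the classic Bayesian Knowledge Tracing update discussed above is sketched below in Python. This is a minimal illustrative sketch of the standard model (Corbett & Anderson, 1995), not the implementation used in this study; the parameter values are placeholders, not values fit to these data.

    # A minimal sketch of the classic Bayesian Knowledge Tracing update.
    # Parameter values are illustrative placeholders, not fit to any data.

    def bkt_update(p_know, correct, p_transit, p_guess, p_slip):
        """Update P(skill known) after observing one response."""
        if correct:
            # Posterior: the skill was known, given a correct response
            posterior = p_know * (1 - p_slip) / (
                p_know * (1 - p_slip) + (1 - p_know) * p_guess)
        else:
            # Posterior: the skill was known but the student slipped
            posterior = p_know * p_slip / (
                p_know * p_slip + (1 - p_know) * (1 - p_guess))
        # The student may also learn the skill at this practice opportunity
        return posterior + (1 - posterior) * p_transit

    def bkt_predict(p_know, p_guess, p_slip):
        """Predicted probability of a correct next response."""
        return p_know * (1 - p_slip) + (1 - p_know) * p_guess

    # Trace one skill across a response sequence (1 = correct, 0 = incorrect)
    p_know = 0.3  # P(L0): initial probability the skill is known
    for correct in [0, 1, 1, 1]:
        print(round(bkt_predict(p_know, p_guess=0.2, p_slip=0.1), 3))
        p_know = bkt_update(p_know, correct,
                            p_transit=0.15, p_guess=0.2, p_slip=0.1)

Modeling across activities then amounts to choosing whether responses from, say, a fluency game and a problem-solving item update a single shared p_know or separate per-activity estimates; the finding above suggests the separate estimates may better match students' cognition.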

Acknowledgments

We gratefully acknowledge support from the Bill and Melinda Gates Foundation. We also thank George Khachatryan for valuable suggestions and comments, and Belinda Yew for assistance with the literature review.

Author information

Corresponding author

Correspondence to William L. Miller.

About this article

Cite this article

Miller, W.L., Baker, R.S. & Rossi, L.M. Unifying Computer-Based Assessment Across Conceptual Instruction, Problem-Solving, and Digital Games. Tech Know Learn 19, 165–181 (2014). https://doi.org/10.1007/s10758-014-9225-5
