
Learning What Works in ITS from Non-traditional Randomized Controlled Trial Data

Conference paper · Intelligent Tutoring Systems (ITS 2010)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 6095)

Abstract

The traditional, well-established approach to finding out what works in education research is to run a randomized controlled trial (RCT) using a standard pretest and posttest design. RCTs have been used in the intelligent tutoring community for decades to determine which questions and tutorial feedback work best. Practically speaking, however, ITS creators need to decide what content to deploy without the benefit of having run an RCT in advance. Additionally, most log data produced by an ITS is not in a form that can easily be evaluated with traditional methods. As a result, tutoring systems produce much data that we would like to learn from but currently do not. In prior work we introduced a potential solution to this problem: a Bayesian network method that analyzes the log data of a tutoring system to determine which items are most effective for learning among a set of items of the same skill. The method was validated by way of simulations. In this work we further evaluate the method by applying it to real-world data from 11 experiment datasets that investigate the effectiveness of various forms of tutorial help in a web-based math tutoring system. The goal of the method was to determine which questions and tutorial strategies cause the most learning. We compared these results with a more traditional hypothesis-testing analysis, adapted to our particular datasets. We analyzed experiments in mastery learning problem sets as well as experiments in problem sets that, even though they were not planned RCTs, took on the standard RCT form. We found that the tutorial help or item chosen by the Bayesian method as having the highest rate of learning agreed with the traditional analysis in 9 of the 11 experiments. The practical impact of this work is the abundance of knowledge about what works that can now be learned from the thousands of experimental designs intrinsic in the datasets of tutoring systems that assign items in a random order.
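The core idea the abstract describes is that when items of the same skill are assigned in a random order, the learning observed between responses can be credited to the item just practiced, and a probabilistic model can estimate a per-item learning rate. The sketch below is a minimal, hedged illustration of that idea, not the authors' actual model: it uses a knowledge-tracing-style two-state HMM in which the learning (transition) probability depends on the item presented, and recovers the per-item rates by grid-search maximum likelihood. All parameter names and values (`P_INIT`, `P_GUESS`, `P_SLIP`, the grid) are illustrative assumptions.

```python
import itertools
import math
import random

# Illustrative parameters (assumed, not from the paper).
P_INIT = 0.30   # P(skill known before any practice)
P_GUESS = 0.14  # P(correct response | skill unknown)
P_SLIP = 0.09   # P(incorrect response | skill known)

def sequence_likelihood(items, responses, learn_rates):
    """Forward-pass likelihood of one student's response sequence under a
    two-state HMM whose learn rate depends on the item just practiced."""
    p_known = P_INIT
    lik = 1.0
    for item, correct in zip(items, responses):
        p_correct = p_known * (1 - P_SLIP) + (1 - p_known) * P_GUESS
        lik *= p_correct if correct else (1 - p_correct)
        # Posterior P(known) given the observed response...
        if correct:
            post = p_known * (1 - P_SLIP) / p_correct
        else:
            post = p_known * P_SLIP / (1 - p_correct)
        # ...then a learning transition credited to the item just seen.
        p_known = post + (1 - post) * learn_rates[item]
    return lik

def fit_learn_rates(data, item_ids, grid):
    """Grid-search the per-item learn rates maximizing total log-likelihood."""
    best, best_ll = None, float("-inf")
    for rates in itertools.product(grid, repeat=len(item_ids)):
        cand = dict(zip(item_ids, rates))
        ll = sum(math.log(sequence_likelihood(seq, resp, cand))
                 for seq, resp in data)
        if ll > best_ll:
            best, best_ll = cand, ll
    return best

def simulate(n_students, true_rates, rng):
    """Generate randomized-order practice data from the same model."""
    data = []
    for _ in range(n_students):
        order = list(true_rates)
        rng.shuffle(order)            # items assigned in random order
        known = rng.random() < P_INIT
        responses = []
        for item in order:
            p_c = (1 - P_SLIP) if known else P_GUESS
            responses.append(rng.random() < p_c)
            if not known and rng.random() < true_rates[item]:
                known = True          # learning caused by this item
        data.append((order, responses))
    return data
```

Because the item order is randomized, students who saw item A first and students who saw item B first differ, on the second response, only in which item they just practiced; fitting on data with a large gap between the true rates therefore recovers which item teaches more.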





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Pardos, Z.A., Dailey, M.D., Heffernan, N.T. (2010). Learning What Works in ITS from Non-traditional Randomized Controlled Trial Data. In: Aleven, V., Kay, J., Mostow, J. (eds) Intelligent Tutoring Systems. ITS 2010. Lecture Notes in Computer Science, vol 6095. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13437-1_5


  • DOI: https://doi.org/10.1007/978-3-642-13437-1_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-13436-4

  • Online ISBN: 978-3-642-13437-1
