Abstract
This paper presents a within-subject, randomized experiment comparing automated interventions for teaching vocabulary to young readers using Project LISTEN’s Reading Tutor. The experiment compared three conditions: no explicit instruction, a quick definition, and a quick definition plus a post-story battery of extended instruction based on a published instructional sequence for human teachers. A month-long study with elementary school children indicates that the quick instruction, which lasted about seven seconds, produced immediate learning gains that did not persist. Extended instruction, which lasted about thirty seconds longer than the quick instruction, had a persistent effect, producing gains on a posttest one week later.
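A minimal sketch of the within-subject design the abstract describes, not the authors' actual implementation: each student's target vocabulary words are randomly divided across the three instruction conditions, so every child experiences all conditions. The function and condition names below are illustrative assumptions.

```python
import random

# Hypothetical labels for the three conditions described in the abstract.
CONDITIONS = ["no_instruction", "quick_definition", "quick_plus_extended"]

def assign_conditions(student_id, words):
    """Randomly split one student's target words evenly across the three conditions."""
    rng = random.Random(student_id)  # per-student seed keeps the assignment reproducible
    shuffled = list(words)
    rng.shuffle(shuffled)
    # Deal words round-robin so each condition receives a roughly equal share.
    return {word: CONDITIONS[i % len(CONDITIONS)] for i, word in enumerate(shuffled)}

# Example: one student's assignment for six target words.
print(assign_conditions(student_id=42,
                        words=["arid", "bashful", "cunning", "drowsy", "eager", "frail"]))
```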
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Heiner, C., Beck, J., Mostow, J. (2006). Automated Vocabulary Instruction in a Reading Tutor. In: Ikeda, M., Ashley, K.D., Chan, TW. (eds) Intelligent Tutoring Systems. ITS 2006. Lecture Notes in Computer Science, vol 4053. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11774303_86
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-35159-7
Online ISBN: 978-3-540-35160-3