Abstract
Scaffolding student engagement is a central challenge in adaptive learning environments. The ICAP framework defines levels of cognitive engagement with a learning activity in terms of four engagement modes (Interactive, Constructive, Active, and Passive) and predicts that increased cognitive engagement yields improved learning. However, a key open question is how best to translate ICAP theory into the design of adaptive scaffolding. Specifically, should scaffolds be designed to require the highest levels of cognitive engagement (i.e., the Interactive and Constructive modes) for every instance of feedback and every knowledge component? To answer this question, we investigate a data-driven pedagogical modeling framework based on batch-constrained deep Q-networks, a deep reinforcement learning (RL) method, to induce policies for delivering ICAP-inspired scaffolding in adaptive learning environments. The policies were trained on log data from 487 learners who interacted with an adaptive learning environment that provided ICAP-inspired feedback and remediation. Results suggest that adaptive scaffolding policies induced with batch-constrained deep Q-networks outperform heuristic policies that strictly follow the ICAP model without RL-based tailoring. The findings demonstrate the utility of deep RL for tailoring scaffolding to learners' cognitive engagement.
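The method named in the abstract, batch-constrained deep Q-learning (BCQ), restricts the Q-learning update to actions that the logged behavior policy would plausibly have taken, which is what makes it suitable for training on fixed interaction logs without further exploration. The sketch below illustrates the discrete-action variant of the idea (after Fujimoto et al.) in PyTorch; the network sizes, the threshold tau, and the names BCQNet and bcq_update are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a discrete batch-constrained deep Q-network (BCQ) update.
# All dimensions, hidden sizes, and the threshold tau are illustrative
# assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BCQNet(nn.Module):
    """Q-network paired with a behavior-cloning (imitation) head."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.q = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))
        self.i = nn.Sequential(  # estimates the logging (behavior) policy
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, state):
        return self.q(state), F.log_softmax(self.i(state), dim=-1)

def bcq_update(net, target_net, optimizer, batch, gamma=0.99, tau=0.3):
    """One BCQ step on a batch of logged (s, a, r, s', done) transitions.

    Next actions whose estimated behavior-policy probability falls below
    tau times the most likely action's probability are masked out of the
    argmax, constraining bootstrapping to in-distribution actions -- the
    core idea of batch-constrained Q-learning.
    """
    s, a, r, s2, done = batch  # a: LongTensor of action indices
    with torch.no_grad():
        q2, log_pi2 = net(s2)
        pi2 = log_pi2.exp()
        mask = (pi2 / pi2.max(dim=1, keepdim=True).values > tau).float()
        # Select the best *permitted* next action with the online net ...
        a2 = (mask * q2 + (1 - mask) * -1e8).argmax(dim=1, keepdim=True)
        # ... and evaluate it with the target net (double-DQN style).
        q2_target, _ = target_net(s2)
        target = r + gamma * (1 - done) * q2_target.gather(1, a2).squeeze(1)

    q, log_pi = net(s)
    q_loss = F.smooth_l1_loss(q.gather(1, a.unsqueeze(1)).squeeze(1), target)
    bc_loss = F.nll_loss(log_pi, a)  # supervised fit to the logged actions
    loss = q_loss + bc_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the setting the abstract describes, each state would encode features of a learner's interaction history, each discrete action would select an ICAP-inspired scaffolding move, and rewards would derive from learning outcomes; the behavior-policy mask keeps the induced policy close to scaffolding decisions actually observed in the training logs.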
Acknowledgements
The research described herein has been sponsored by the U.S. Army Research Laboratory under cooperative agreement W911NF-15-2-0030. The statements and opinions expressed in this article do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Fahid, F.M., Rowe, J.P., Spain, R.D., Goldberg, B.S., Pokorny, R., Lester, J.: Adaptively scaffolding cognitive engagement with batch constrained deep Q-networks. In: Roll, I., McNamara, D., Sosnovsky, S., Luckin, R., Dimitrova, V. (eds.) AIED 2021. LNCS (LNAI), vol. 12748. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-78292-4_10
DOI: https://doi.org/10.1007/978-3-030-78292-4_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-78291-7
Online ISBN: 978-3-030-78292-4