Adaptive, Intelligent, and Personalized: Navigating the Terminological Maze Behind Educational Technology

  • Commentary
International Journal of Artificial Intelligence in Education

Abstract

Educational technology terminology is messy. The same meaning is often expressed using several terms; more confusingly, some terms are used with several meanings. This state of affairs is unfortunate, as it makes both research and development more difficult. Terminology is particularly important for personalization techniques, where nuances of meaning are often crucial. We discuss the current state of the terminology in use, highlighting specific cases of potential confusion. Significant unification of terminology does not seem feasible in the near future. A realistic and still very useful step forward is to make the terminology used in individual research papers more explicit.

Fig. 1
Fig. 2

Notes

  1. For example, according to Google Scholar search statistics, “student modeling” is twice as common a phrase as “learner modeling”.

  2. One example of an explicit definition is by Sottilare et al. (2016): “The domain model contains the set of skills, knowledge, and strategies/tactics of the topic being tutored. It normally contains the ideal expert knowledge and also the bugs, mal-rules, and misconceptions that students periodically exhibit.”

  3. The Wikipedia page for domain model gives the definition: “In ontology engineering, a domain model is a formal representation of a knowledge domain with concepts, roles, datatypes, individuals, and rules, typically grounded in a description logic.”

References

  • Aleven, V. (2010). Rule-based cognitive modeling for intelligent tutoring systems. In Advances in intelligent tutoring systems (pp. 33–62): Springer.

  • Aleven, V., McLaughlin, E.A., Glenn, R.A., & Koedinger, K.R. (2016). Instruction based on adaptive learning technologies. Handbook of research on learning and instruction, 522–560.

  • Baker, R., Walonoski, J., Heffernan, N., Roll, I., Corbett, A., & Koedinger, K. (2008). Why students engage in “gaming the system” behavior in interactive learning environments. Journal of Interactive Learning Research, 19(2), 185–224.

  • Baker, R.S. (2007). Modeling and understanding students’ off-task behavior in intelligent tutoring systems. In Proceedings of SIGCHI conference on Human factors in computing systems (pp. 1059–1068).

  • Barnes, T. (2005). The q-matrix method: Mining student response data for knowledge. In American association for artificial intelligence 2005 educational data mining workshop (pp. 1–8).

  • Beck, J.E., & Gong, Y. (2013). Wheel-spinning: Students who fail to master a skill. In Proceedings of artificial intelligence in education (pp. 431–440): Springer.

  • Beckmann, J.F., Birney, D.P., & Goode, N. (2017). Beyond psychometrics: the difference between difficult problem solving and complex problem solving. Frontiers in Psychology, 8, 1739.

  • Bjork, E.L., & Bjork, R.A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. Psychology and the real world: Essays illustrating fundamental contributions to society, 2, 59–68.

  • Brusilovsky, P., & Pesin, L. (1998). Adaptive navigation support in educational hypermedia: An evaluation of the ISIS-Tutor. Journal of Computing and Information Technology, 6(1), 27–38.

  • Cepeda, N.J., Vul, E., Rohrer, D., Wixted, J.T., & Pashler, H. (2008). Spacing effects in learning: a temporal ridgeline of optimal retention. Psychological Science, 19(11), 1095–1102.

  • Chrysafiadi, K., & Virvou, M. (2013). Student modeling approaches: a literature review for the last decade. Expert Systems with Applications, 40(11), 4715–4729.

  • Churchill, D. (2007). Towards a useful classification of learning objects. Educational Technology Research and Development, 55(5), 479–497.

  • Cook, D.A., & Beckman, T.J. (2006). Current concepts in validity and reliability for psychometric instruments: theory and application. The American Journal of Medicine, 119(2), 166–e7.

  • Cumming, G., Fidler, F., & Vaux, D.L. (2007). Error bars in experimental biology. The Journal of Cell Biology, 177(1), 7–11.

  • De Ayala, R. (2008). The theory and practice of item response theory. New York: The Guilford Press.

  • van de Sande, B. (2013). Properties of the Bayesian knowledge tracing model. Journal of Educational Data Mining, 5(2), 1–10.

  • Doble, C., Matayoshi, J., Cosyn, E., Uzun, H., & Karami, A. (2019). A data-based simulation study of reliability for an adaptive assessment based on knowledge space theory. International Journal of Artificial Intelligence in Education, 29(2), 258–282.

  • Doignon, J.P., & Falmagne, J.C. (2016). Knowledge spaces and learning spaces. New handbook of mathematical psychology, 274–321.

  • Essa, A. (2016). A possible future for next generation adaptive learning systems. Smart Learning Environments, 3(1), 16.

  • Evans, E. (2004). Domain-driven design: tackling complexity in the heart of software. Boston: Addison-Wesley Professional.

  • Falmagne, J.C., Albert, D., Doble, C., Eppstein, D., & Hu, X. (2013). Knowledge spaces: Applications in education. Berlin: Springer Science & Business Media.

  • Fritz, C.O., Morris, P.E., & Richler, J.J. (2012). Effect size estimates: current use, calculations, and interpretation. Journal of Experimental Psychology: General, 141(1), 2.

  • Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

  • Hunicke, R. (2005). The case for dynamic difficulty adjustment in games. In Proceedings of Advances in Computer Entertainment Technology (pp. 429–433).

  • Hylén, J. (2006). Open educational resources: Opportunities and challenges. Proceedings of Open education, 4963.

  • Iglesias, A., Martínez, P., Aler, R., & Fernández, F. (2009). Reinforcement learning of pedagogical policies in adaptive and intelligent educational systems. Knowledge-Based Systems, 22(4), 266–270.

  • John, R.J.L., McTavish, T.S., & Passonneau, R.J. (2015). Semantic graphs for mathematics word problems based on mathematics terminology. In EDM (Workshops).

  • Jumaat, N.F., & Tasir, Z. (2014). Instructional scaffolding in online learning environment: a meta-analysis. In 2014 international conference on teaching and learning in computing and engineering (pp. 74–77): IEEE.

  • Kang, S.H. (2016). Spaced repetition promotes efficient and effective learning: Policy implications for instruction. Policy Insights from the Behavioral and Brain Sciences, 3(1), 12–19.

  • Käser, T., Klingler, S., & Gross, M. (2016). When to stop? Towards universal instructional policies. In Proceedings of Learning Analytics & Knowledge (pp. 289–298).

  • Klinkenberg, S., Straatemeier, M., & van der Maas, H.L. (2011). Computer adaptive practice of maths ability using a new item response model for on the fly ability and difficulty estimation. Computers & Education, 57(2), 1813–1824.

  • Koedinger, K.R., Corbett, A.T., & Perfetti, C. (2012). The knowledge-learning-instruction framework: Bridging the science-practice chasm to enhance robust student learning. Cognitive Science, 36(5), 757–798.

  • Kornell, N. (2009). Optimising learning using flashcards: Spacing is more effective than cramming. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition, 23(9), 1297–1317.

  • Liu, P., & Li, Z. (2012). Task complexity: a review and conceptualization framework. International Journal of Industrial Ergonomics, 42(6), 553–568.

  • Liu, R., & Koedinger, K.R. (2017). Closing the loop: Automated data-driven cognitive model discoveries lead to improved instruction and learning gains. Journal of Educational Data Mining, 9(1), 25–41.

  • Lomas, D., Patel, K., Forlizzi, J.L., & Koedinger, K.R. (2013). Optimizing challenge in an educational game using large-scale design experiments. In Proceedings SIGCHI Conference on human factors in computing systems (pp. 89–98).

  • Manouselis, N., Drachsler, H., Vuorikari, R., Hummel, H., & Koper, R. (2011). Recommender systems in technology enhanced learning. In Recommender systems handbook (pp. 387–415). Springer.

  • Martin, B., & Mitrovic, A. (2003). ITS domain modelling: Art or science? In Proceedings of artificial intelligence in education (pp. 183–190).

  • McShane, B.B., Gal, D., Gelman, A., Robert, C., & Tackett, J.L. (2019). Abandon statistical significance. The American Statistician, 73(sup1), 235–245.

  • Mitrovic, A. (2010). Modeling domains and students with constraint-based modeling. In Advances in intelligent tutoring systems (pp. 63–80): Springer.

  • Nakamura, J., & Csikszentmihalyi, M. (2014). The concept of flow. In Flow and the foundations of positive psychology (pp. 239–263): Springer.

  • Pavlik, P.I., & Anderson, J.R. (2008). Using a model to compute the optimal schedule of practice. Journal of Experimental Psychology: Applied, 14(2), 101.

  • Pavlik Jr., P.I., & Anderson, J.R. (2005). Practice and forgetting effects on vocabulary memory: An activation-based model of the spacing effect. Cognitive Science, 29(4), 559–586.

  • Pelánek, R. (2016). Applications of the Elo rating system in adaptive educational systems. Computers & Education, 98, 169–179.

  • Pelánek, R. (2017). Bayesian knowledge tracing, logistic models, and beyond: an overview of learner modeling techniques. User Modeling and User-Adapted Interaction, 27(3), 313–350.

  • Pelánek, R. (2018a). Conceptual issues in mastery criteria: Differentiating uncertainty and degrees of knowledge. In Proceedings of artificial intelligence in education (pp. 450–461): Springer.

  • Pelánek, R. (2018b). The details matter: methodological nuances in the evaluation of student models. User Modeling and User-Adapted Interaction, 28, 207–235.

  • Pelánek, R. (2020a). A classification framework for practice exercises in adaptive learning systems. IEEE Transactions on Learning Technologies.

  • Pelánek, R. (2020b). Managing items and knowledge components: domain modeling in practice. Educational Technology Research and Development, 68(1), 529–550.

  • Pelánek, R., & Řihák, J. (2018). Analysis and design of mastery learning criteria. New Review of Hypermedia and Multimedia, 24(3), 133–159.

  • Pelánek, R., Papoušek, J., Řihák, J., Stanislav, V., & Nižnan, J. (2017). Elo-based learner modeling for the adaptive practice of facts. User Modeling and User-Adapted Interaction, 27(1), 89–118.

  • Reddy, S., Labutov, I., Banerjee, S., & Joachims, T. (2016). Unbounded human learning: Optimal scheduling for spaced repetition. In Proceedings of knowledge discovery and data mining (pp. 1815–1824).

  • Ripley, B.D. (2007). Pattern recognition and neural networks. Cambridge: Cambridge University Press.

  • Roediger, H.L. III, & Butler, A.C. (2011). The critical role of retrieval practice in long-term retention. Trends in Cognitive Sciences, 15(1), 20–27.

  • Rohrer, D. (2015). Student instruction should be distributed over long time periods. Educational Psychology Review, 27(4), 635–643.

  • Roll, I., Baker, R.S.d., Aleven, V., & Koedinger, K.R. (2014). On the benefits of seeking (and avoiding) help in online problem-solving environments. Journal of the Learning Sciences, 23(4), 537–560.

  • Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215.

  • Sarkar, A., & Cooper, S. (2019). Transforming game difficulty curves using function composition. In Proceedings of CHI conference on human factors in computing systems (pp. 1–7).

  • Settles, B., & Meeder, B. (2016). A trainable spaced repetition model for language learning. In Proceedings of Annual meeting of the association for computational linguistics (volume 1: long papers) (pp. 1848–1858).

  • Sottilare, R.A., Graesser, A., Hu, X., & Holden, H. (2013). Design recommendations for intelligent tutoring systems: Volume 1 - Learner modeling. US Army Research Laboratory.

  • Sottilare, R.A., Graesser, A.C., Hu, X., Olney, A., Nye, B., & Sinatra, A.M. (2016). Design recommendations for intelligent tutoring systems: Volume 4 - Domain modeling. US Army Research Laboratory.

  • Tatsuoka, K.K. (1983). Rule space: an approach for dealing with misconceptions based on item response theory. Journal of Educational Measurement, 20(4), 345–354.

  • Taylor, K., & Rohrer, D. (2010). The effects of interleaved practice. Applied Cognitive Psychology, 24(6), 837–848.

  • VanLehn, K. (2006). The behavior of tutoring systems. International Journal of Artificial Intelligence in Education, 16(3), 227–265.

  • VanLehn, K. (2016). Regulative loops, step loops and task loops. International Journal of Artificial Intelligence in Education, 26(1), 107–112.

  • Wang, H.C., Li, T.Y., Chang, C.Y., et al. (2005). A user modeling framework for exploring creative problem-solving ability. In Proceedings of artificial intelligence in education (pp. 941–943).

Acknowledgements

The author thanks Ryan Baker, Michael Yudelson, Ken Koedinger, Tomáš Effenberger, Jaroslav Čechák, and Valdemar Švábenský for their comments, ideas, and suggestions.

Author information

Corresponding author

Correspondence to Radek Pelánek.

About this article

Cite this article

Pelánek, R. Adaptive, Intelligent, and Personalized: Navigating the Terminological Maze Behind Educational Technology. Int J Artif Intell Educ 32, 151–173 (2022). https://doi.org/10.1007/s40593-021-00251-5
