
Using Recurrent Neural Networks to Build a Stopping Algorithm for an Adaptive Assessment

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11626)

Abstract

ALEKS (“Assessment and LEarning in Knowledge Spaces”) is an adaptive learning and assessment system based on knowledge space theory. In this work, our goal is to improve the overall efficiency of the ALEKS assessment by developing an algorithm that can accurately predict when the assessment should be stopped. Using data from more than 1.4 million assessments, we first build recurrent neural network classifiers that attempt to predict the final result of each assessment. We then use these classifiers to develop our stopping algorithm, with the test results indicating that the length of the assessment can potentially be reduced by a large amount while maintaining a high level of accuracy.
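The two-stage approach described in the abstract (an RNN classifier that predicts the final assessment result after each question, followed by a stopping rule applied to the classifier's outputs) can be sketched in pure Python. This is a hypothetical illustration, not the authors' algorithm: the `stopping_step` function and its `threshold` and `patience` parameters are assumptions standing in for whatever criterion the paper develops from the per-question classifier probabilities.

```python
def stopping_step(probabilities, threshold=0.95, patience=2):
    """Return the 1-based question index at which to stop, or None.

    probabilities: the classifier's per-question outputs, where p_t
    estimates the final assessment result given responses 1..t.
    Stop once the prediction is confident (>= threshold, or
    <= 1 - threshold for the opposite class) on `patience`
    consecutive questions.
    """
    consecutive = 0
    for t, p in enumerate(probabilities, start=1):
        if p >= threshold or p <= 1 - threshold:
            consecutive += 1
            if consecutive >= patience:
                return t
        else:
            # Confidence dropped; reset the run of confident predictions.
            consecutive = 0
    return None  # never confident enough: administer the full assessment
```

Requiring several consecutive confident predictions (rather than stopping at the first one) is a common way to guard against a transient spike in the classifier's output ending the assessment prematurely.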


Notes

  1.

    Students actually answer up to 30 questions when accounting for a randomly chosen question that is used for validation and other statistics. This number of questions balances the need to gather enough information about the student’s knowledge state against the possibility of overwhelming the student. Regarding the latter concern, see [16] for evidence of a “fatigue effect” experienced by students in ALEKS assessments.

References

  1. Botelho, A.F., Baker, R.S., Heffernan, N.T.: Improving sensor-free affect detection using deep learning. In: André, E., Baker, R., Hu, X., Rodrigo, M.M.T., du Boulay, B. (eds.) AIED 2017. LNCS (LNAI), vol. 10331, pp. 40–51. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-61425-0_4


  2. Cho, K., van Merrienboer, B., Gülçehre, Ç., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR abs/1406.1078 (2014). http://arxiv.org/abs/1406.1078

  3. Chung, J., Gulcehre, C., Cho, K., Bengio, Y.: Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014)

  4. Doble, C., Matayoshi, J., Cosyn, E., Uzun, H., Karami, A.: A data-based simulation study of reliability for an adaptive assessment based on knowledge space theory. Int. J. Artif. Intell. Educ. (2019). https://doi.org/10.1007/s40593-019-00176-0

  5. Doignon, J.P., Falmagne, J.C.: Spaces for the assessment of knowledge. Int. J. Man-Mach. Stud. 23, 175–196 (1985)


  6. Falmagne, J.C., Albert, D., Doble, C., Eppstein, D., Hu, X. (eds.): Knowledge Spaces: Applications in Education. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-35329-1


  7. Falmagne, J.C., Doignon, J.P.: Learning Spaces. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-01039-2


  8. Gal, Y., Ghahramani, Z.: A theoretically grounded application of dropout in recurrent neural networks. In: Advances in Neural Information Processing Systems, vol. 29 (2016)


  9. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9, 1735–1780 (1997)


  10. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning, pp. 448–456 (2015)


  11. Jiang, W., Pardos, Z.A., Wei, Q.: Goal-based course recommendation. In: Proceedings of the 9th International Conference on Learning Analytics & Knowledge, pp. 36–45 (2019)


  12. Jiang, Y., et al.: Expert feature-engineering vs. deep neural networks: which is better for sensor-free affect detection? In: Penstein Rosé, C., et al. (eds.) AIED 2018. LNCS (LNAI), vol. 10947, pp. 198–211. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-93843-1_15


  13. Khajah, M., Lindsey, R., Mozer, M.: How deep is knowledge tracing? In: Proceedings of the 9th International Conference on Educational Data Mining, pp. 94–101 (2016)


  14. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521, 436–444 (2015)


  15. Lin, C., Chi, M.: A comparisons of BKT, RNN and LSTM for learning gain prediction. In: André, E., Baker, R., Hu, X., Rodrigo, M.M.T., du Boulay, B. (eds.) AIED 2017. LNCS (LNAI), vol. 10331, pp. 536–539. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-61425-0_58


  16. Matayoshi, J., Granziol, U., Doble, C., Uzun, H., Cosyn, E.: Forgetting curves and testing effect in an adaptive learning and assessment system. In: Proceedings of the 11th International Conference on Educational Data Mining, pp. 607–612 (2018)


  17. McGraw-Hill Education/ALEKS Corporation: What is ALEKS? https://www.aleks.com/about_aleks

  18. Piech, C., et al.: Deep knowledge tracing. In: Advances in Neural Information Processing Systems, pp. 505–513 (2015)


  19. Prechelt, L.: Early stopping — but when? In: Montavon, G., Orr, G.B., Müller, K.-R. (eds.) Neural Networks: Tricks of the Trade. LNCS, vol. 7700, pp. 53–67. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-35289-8_5


  20. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15, 1929–1958 (2014)


  21. Xiong, X., Zhao, S., Vaninwegen, E., Beck, J.: Going deeper with knowledge tracing. In: Proceedings of the 9th International Conference on Educational Data Mining, pp. 545–550 (2016)


  22. Yin, W., Kann, K., Yu, M., Schütze, H.: Comparative study of CNN and RNN for natural language processing. arXiv preprint arXiv:1702.01923 (2017)


Author information

Correspondence to Jeffrey Matayoshi.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Matayoshi, J., Cosyn, E., Uzun, H. (2019). Using Recurrent Neural Networks to Build a Stopping Algorithm for an Adaptive Assessment. In: Isotani, S., Millán, E., Ogan, A., Hastings, P., McLaren, B., Luckin, R. (eds.) Artificial Intelligence in Education. AIED 2019. Lecture Notes in Computer Science (LNAI), vol. 11626. Springer, Cham. https://doi.org/10.1007/978-3-030-23207-8_34

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-23207-8_34


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-23206-1

  • Online ISBN: 978-3-030-23207-8

  • eBook Packages: Computer Science (R0)
