
Learning performance prediction via convolutional GRU and explainable neural networks in e-learning environments


Abstract

Predicting students' learning performance is a challenging task because of the dynamic, virtual nature of e-learning environments and the personalized needs of individual learners. To ensure that learners' potential problems can be identified as early as possible, this paper aims to develop a predictive model covering learning feature extraction, learning performance prediction, and result reasoning. We first propose a general learning feature quantification method that converts raw data from e-learning systems into sets of independent learning features. Then, weighted average pooling is adopted in place of the typical max-pooling in a novel convolutional GRU network for learning performance prediction. Finally, an improved parallel explainable neural network (xNN) is provided to explain the prediction results. The positive or negative relevance between each feature and the predicted result can help students identify which aspects should be improved. Experiments were carried out on data from two real online courses. The results show that the proposed approach performs favorably compared with several state-of-the-art methods.
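
To make the prediction step concrete, the following minimal sketch illustrates the general idea of a convolutional GRU that replaces max-pooling with a learned weighted average over time steps. It is not the authors' implementation: the use of PyTorch, the layer sizes, and the softmax-scored pooling weights are assumptions made here for illustration only.

# Minimal sketch (not the authors' code) of a convolutional GRU predictor with
# learned weighted average pooling, assuming PyTorch and an input of shape
# (batch, time_steps, num_features) built from quantified learning features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvGRUPredictor(nn.Module):
    def __init__(self, num_features: int, conv_channels: int = 32,
                 hidden_size: int = 64, num_classes: int = 2):
        super().__init__()
        # 1-D convolution over the time axis extracts local patterns
        # from the sequence of learning-feature vectors.
        self.conv = nn.Conv1d(num_features, conv_channels,
                              kernel_size=3, padding=1)
        # GRU captures longer-range temporal dependencies.
        self.gru = nn.GRU(conv_channels, hidden_size, batch_first=True)
        # Scoring layer for weighted average pooling: each time step gets a
        # learned weight instead of being reduced by max-pooling.
        self.attn = nn.Linear(hidden_size, 1)
        self.out = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                         # x: (batch, time, features)
        h = F.relu(self.conv(x.transpose(1, 2)))  # (batch, channels, time)
        h, _ = self.gru(h.transpose(1, 2))        # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)    # per-step pooling weights
        pooled = (w * h).sum(dim=1)               # weighted average over time
        return self.out(pooled)                   # class logits

# Usage on random data standing in for quantified learning features
# (4 students, 20 time steps, 10 features per step).
model = ConvGRUPredictor(num_features=10)
logits = model(torch.randn(4, 20, 10))
print(logits.shape)                               # torch.Size([4, 2])

Compared with max-pooling, which keeps only the single strongest activation per channel, a weighted average of this kind lets every time step contribute in proportion to its learned importance, which matches the paper's motivation for choosing average-style pooling.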


Notes

  1. http://www.keenage.com/.

  2. http://www.worlduc.com/.

  3. http://moodle.scnu.edu.cn/.


Author information


Corresponding author

Correspondence to Pengze Wu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported by the National Natural Science Foundation of China (Nos. 61877020 and 61802132), the China Postdoctoral Science Foundation (No. 2018M630959), the S&T Project of Guangdong Province (No. 2016B010109008), and the S&T Project of Guangzhou Municipality, China (No. 201604016019).


About this article


Cite this article

Wang, X., Wu, P., Liu, G. et al. Learning performance prediction via convolutional GRU and explainable neural networks in e-learning environments. Computing 101, 587–604 (2019). https://doi.org/10.1007/s00607-018-00699-9

