
An Academic Achievement Prediction Model Enhanced by Stacking Network

  • Conference paper
  • First Online:
Digital TV and Wireless Multimedia Communication (IFTC 2019)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1181))

Abstract

This article focuses on applying data mining and machine learning in AI education to improve the prediction accuracy of students' academic achievement. Many well-established gradient boosting machines already exist for prediction on small data sets, such as LightGBM and XGBoost. Building on these, we present and evaluate a new method for regression prediction. Our Stacking Network combines traditional ensemble models with the layered design of deep neural networks. Unlike the original Stacking method, a Stacking Network can stack arbitrarily many layers, which allows it to substantially outperform traditional Stacking. At the same time, compared with a deep neural network, the Stacking Network inherits the advantages of boosting machines. Applying this approach, we achieved higher accuracy and faster training than a conventional deep neural network, and we obtained the highest rank on the Middle School Grade Dataset provided by Shanghai Telecom Corporation.
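The paper's own Stacking Network is not reproduced in this preview. As a rough illustration of the general idea the abstract describes (out-of-fold predictions from one layer of base learners become the input features of the next layer, and layers can be stacked repeatedly), here is a minimal NumPy sketch. The helper names (`ridge_fit`, `oof_predictions`, `stacking_network`), the choice of closed-form ridge regressors as stand-in base learners, and all hyperparameters are illustrative assumptions, not the authors' method, which uses gradient boosting machines as base learners.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression with a bias column:
    # w = (Xb^T Xb + lam * I)^-1 Xb^T y
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def ridge_predict(w, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ w

def oof_predictions(X, y, lam, k=5):
    # K-fold out-of-fold predictions: each sample is predicted by a
    # model that never saw it, so the next layer gets leak-free features.
    n = X.shape[0]
    preds = np.zeros(n)
    for fold in np.array_split(np.arange(n), k):
        mask = np.ones(n, dtype=bool)
        mask[fold] = False
        w = ridge_fit(X[mask], y[mask], lam)
        preds[fold] = ridge_predict(w, X[fold])
    return preds

def stacking_network(X, y, layer_lams):
    # Each layer holds several base learners (one per regularization
    # strength); their OOF predictions form the next layer's features.
    feats = X
    for lams in layer_lams:
        feats = np.column_stack([oof_predictions(feats, y, lam) for lam in lams])
    # Final blend: average the last layer's learners.
    return feats.mean(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=200)

# Two stacked layers: three base learners, then two meta-learners.
pred = stacking_network(X, y, layer_lams=[[0.1, 1.0, 10.0], [0.1, 1.0]])
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Because every layer is trained on out-of-fold predictions, adding layers does not let later learners memorize the targets; that is what makes "infinitely" deep stacking feasible in principle.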


References

  1. Ke, G., Meng, Q., Finley, T., et al.: LightGBM: a highly efficient gradient boosting decision tree. In: Advances in Neural Information Processing Systems, pp. 3146–3154 (2017)

  2. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794. ACM (2016)

  3. Lemley, M.A., Shapiro, C.: Patent holdup and royalty stacking. Tex. L. Rev. 85, 1991 (2007)

  4. Breiman, L.: Bagging predictors. Mach. Learn. 24(2), 123–140 (1996)

  5. Fauconnier, G., Turner, M.: The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. Basic Books, New York (2008)

  6. Rowley, H.A., Baluja, S., Kanade, T.: Neural network-based face detection. IEEE Trans. Pattern Anal. Mach. Intell. 20(1), 23–38 (1998)

  7. Specht, D.F.: A general regression neural network. IEEE Trans. Neural Netw. 2(6), 568–576 (1991)

  8. Krogh, A., Vedelsby, J.: Neural network ensembles, cross validation, and active learning. In: Advances in Neural Information Processing Systems, pp. 231–238 (1995)

  9. Li, J., Chang, H., Yang, J.: Sparse deep stacking network for image classification. In: Twenty-Ninth AAAI Conference on Artificial Intelligence (2015)

  10. Prokhorenkova, L., Gusev, G., Vorobev, A., et al.: CatBoost: unbiased boosting with categorical features. In: Advances in Neural Information Processing Systems, pp. 6638–6648 (2018)

  11. Odom, M.D., Sharda, R.: A neural network model for bankruptcy prediction. In: 1990 IJCNN International Joint Conference on Neural Networks, pp. 163–168. IEEE (1990)

  12. Rose, S.: Mortality risk score prediction in an elderly population using machine learning. Am. J. Epidemiol. 177(5), 443–452 (2013)

  13. Grady, J., Oakley, T., Coulson, S.: Blending and metaphor. Amst. Stud. Theory Hist. Linguist. Sci. Ser. 4, 101–124 (1999)

  14. Freund, Y., Iyer, R., Schapire, R.E., et al.: An efficient boosting algorithm for combining preferences. J. Mach. Learn. Res. 4(Nov), 933–969 (2003)

  15. Schapire, R.E.: A brief introduction to boosting. In: IJCAI, vol. 99, pp. 1401–1406 (1999)

  16. Solomatine, D.P., Shrestha, D.L.: AdaBoost.RT: a boosting algorithm for regression problems. In: 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541), vol. 2, pp. 1163–1168. IEEE (2004)

  17. Kudo, T., Matsumoto, Y.: A boosting algorithm for classification of semi-structured text. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 301–308 (2004)

  18. Yosinski, J., Clune, J., Bengio, Y., et al.: How transferable are features in deep neural networks? In: Advances in Neural Information Processing Systems, pp. 3320–3328 (2014)

  19. Esteva, A., Kuprel, B., Novoa, R.A., et al.: Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639), 115 (2017)

  20. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256 (2010)

  21. Hecht-Nielsen, R.: Theory of the backpropagation neural network. In: Neural Networks for Perception, pp. 65–93. Academic Press (1992)

  22. Maas, A.L., Hannun, A.Y., Ng, A.Y.: Rectifier nonlinearities improve neural network acoustic models. In: Proceedings of ICML, vol. 30, no. 1, p. 3 (2013)

  23. Psaltis, D., Sideris, A., Yamamura, A.A.: A multilayered neural network controller. IEEE Control Syst. Mag. 8(2), 17–21 (1988)

  24. Kalchbrenner, N., Grefenstette, E., Blunsom, P.: A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188 (2014)

  25. Saposnik, G., Cote, R., Mamdani, M., et al.: JURaSSiC: accuracy of clinician vs risk score prediction of ischemic stroke outcomes. Neurology 81(5), 448–455 (2013)

  26. Holland, P.W., Hoskens, M.: Classical test theory as a first-order item response theory: application to true-score prediction from a possibly nonparallel test. Psychometrika 68(1), 123–149 (2003)

  27. Liu, Y., An, A., Huang, X.: Boosting prediction accuracy on imbalanced datasets with SVM ensembles. In: Ng, W.-K., Kitsuregawa, M., Li, J., Chang, K. (eds.) PAKDD 2006. LNCS (LNAI), vol. 3918, pp. 107–118. Springer, Heidelberg (2006). https://doi.org/10.1007/11731139_15

  28. Chawla, N.V., Lazarevic, A., Hall, L.O., Bowyer, K.W.: SMOTEBoost: improving prediction of the minority class in boosting. In: Lavrač, N., Gamberger, D., Todorovski, L., Blockeel, H. (eds.) PKDD 2003. LNCS (LNAI), vol. 2838, pp. 107–119. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-39804-2_12

  29. Bühlmann, P., Hothorn, T.: Boosting algorithms: regularization, prediction and model fitting. Stat. Sci. 22(4), 477–505 (2007)

  30. Bagnell, J.A., Chestnutt, J., Bradley, D.M., et al.: Boosting structured prediction for imitation learning. In: Advances in Neural Information Processing Systems, pp. 1153–1160 (2007)

  31. Du, X., Sun, S., Hu, C., et al.: DeepPPI: boosting prediction of protein-protein interactions with deep neural networks. J. Chem. Inf. Model. 57(6), 1499–1510 (2017)

  32. Lu, N., Lin, H., Lu, J., et al.: A customer churn prediction model in telecom industry using boosting. IEEE Trans. Industr. Inf. 10(2), 1659–1665 (2012)

  33. Bühlmann, P., Hothorn, T.: Twin boosting: improved feature selection and prediction. Stat. Comput. 20(2), 119–138 (2010)

  34. Friedman, J.H.: Stochastic gradient boosting. Comput. Stat. Data Anal. 38(4), 367–378 (2002)


Author information

Corresponding author

Correspondence to Shaofeng Zhang.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Zhang, S., Liu, M., Zhang, J. (2020). An Academic Achievement Prediction Model Enhanced by Stacking Network. In: Zhai, G., Zhou, J., Yang, H., An, P., Yang, X. (eds) Digital TV and Wireless Multimedia Communication. IFTC 2019. Communications in Computer and Information Science, vol 1181. Springer, Singapore. https://doi.org/10.1007/978-981-15-3341-9_20

  • DOI: https://doi.org/10.1007/978-981-15-3341-9_20

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-3340-2

  • Online ISBN: 978-981-15-3341-9

  • eBook Packages: Computer Science (R0)
