
DWE-IL: a new incremental learning algorithm for non-stationary time series prediction via dynamically weighting ensemble learning

Published in: Applied Intelligence

Abstract

In this work, an Incremental Learning algorithm via Dynamically Weighting Ensemble learning (DWE-IL) is proposed to solve the problem of Non-Stationary Time Series Prediction (NS-TSP). The basic principle of DWE-IL is to track real-time data changes by dynamically establishing and maintaining a knowledge base composed of multiple base models. It trains a base model on each non-stationary time series subset and finally combines the base models under dynamically weighting rules. The emphasis of the DWE-IL algorithm lies in the update of the data weights and the base model weights, and in the training of the base models. Finally, experimental results of the DWE-IL algorithm on six non-stationary time series datasets are presented and compared with those of several other strong algorithms. The results show that DWE-IL provides an effective solution to the challenges of NS-TSP tasks and significantly outperforms the comparative algorithms.
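The abstract describes an ensemble that maintains a knowledge base of base models, trains a new base model on each incoming data subset, and combines the models under dynamically updated weights. The sketch below illustrates that general scheme only; the specific weight-update rule (inverse recent-error weighting), the least-squares base learner, and the fixed-capacity eviction policy are hypothetical stand-ins, not the paper's actual DWE-IL rules.

```python
import numpy as np

class DynamicWeightedEnsemble:
    """Illustrative sketch of a dynamically weighted incremental
    ensemble for non-stationary series. The update rules here are
    assumptions for demonstration, not the DWE-IL algorithm itself."""

    def __init__(self, max_models=5, eps=1e-8):
        self.max_models = max_models  # capacity of the knowledge base
        self.eps = eps                # guards against division by zero
        self.models = []              # knowledge base of base models
        self.weights = np.array([])   # one combination weight per model

    def _fit_base_model(self, X, y):
        # Hypothetical base learner: ordinary least squares on the
        # current data chunk (a stand-in for the paper's base model).
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    def partial_fit(self, X, y):
        # Train a base model on the newest subset and add it to the
        # knowledge base, evicting the oldest model when full.
        self.models.append(self._fit_base_model(X, y))
        if len(self.models) > self.max_models:
            self.models.pop(0)
        # Dynamically re-weight: each model's weight is its inverse
        # squared error on the newest chunk, normalized to sum to 1,
        # so models tracking the current regime dominate.
        errors = np.array([np.mean((X @ m - y) ** 2) for m in self.models])
        raw = 1.0 / (errors + self.eps)
        self.weights = raw / raw.sum()

    def predict(self, X):
        # Weighted combination of all base model predictions.
        preds = np.column_stack([X @ m for m in self.models])
        return preds @ self.weights
```

Feeding the ensemble successive chunks drawn from a drifting process shifts nearly all weight onto the models fit to the most recent regime, which is the behavior the dynamic weighting is meant to produce.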





Acknowledgments

This work is supported by the National Key R&D Program of China (Grant Nos. 2018YFC2001600, 2018YFC2001602), and the National Natural Science Foundation of China under Grant no. 61473150.

Author information

Correspondence to Qun Dai.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Yu, H., Dai, Q. DWE-IL: a new incremental learning algorithm for non-stationary time series prediction via dynamically weighting ensemble learning. Appl Intell 52, 174–194 (2022). https://doi.org/10.1007/s10489-021-02385-4

