
Selected an Stacking ELMs for Time Series Prediction

Published in: Neural Processing Letters

Abstract

Extreme learning machine (ELM) has several interesting and significant features. In this paper, a novel pruned Stacking ELMs (PS-ELMs) algorithm for time series prediction (TSP) is proposed. It employs ELM as the level-0 algorithm to train several models for Stacking, and our previously proposed ReTSP-Trend pruning technique is used to address the problem that the level-0 learners may make many correlated erroneous predictions. ReTSP-Trend is an evaluation measure for reduce-error pruning for TSP (ReTSP) that takes into account the time series trend and the direction of the forecasting error. Moreover, ELM and simple averaging are used to generate the level-1 model. The development of PS-ELMs brings several benefits. First, the essential advantages of ELM are naturally inherited. Second, specific defects of ELM are ameliorated to some extent with the help of the ensemble pruning paradigm. Third, ensemble pruning raises the robustness and accuracy of time series forecasting, making up for shortcomings of the existing research. Fourth, our previously proposed pruning measure ReTSP-Trend guarantees that the remaining predictor which supplements the subensemble the most is selected. Finally, the development of PS-ELMs advances our investigation of the popular ensemble technique of Stacked Generalization. Experimental results on four benchmark financial time series datasets verify the validity of the proposed PS-ELMs algorithm.
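The pipeline sketched in the abstract lends itself to a compact illustration. Below is a minimal Python sketch, under stated assumptions: the names SimpleELM, make_lagged, and reduce_error_prune are illustrative, not the authors' code; the greedy pruning step scores candidate subensembles by plain validation RMSE, whereas the paper's ReTSP-Trend measure additionally accounts for the series trend and the direction of the forecasting error; and simple averaging serves as the level-1 combiner, one of the two options named in the abstract.

    # Minimal sketch of the PS-ELMs pipeline described in the abstract.
    # All names are illustrative, not the authors' code. Greedy pruning
    # below scores subensembles by plain validation RMSE; the paper's
    # ReTSP-Trend measure additionally weighs the series trend and the
    # direction of the forecasting error.
    import numpy as np

    class SimpleELM:
        """Single-hidden-layer ELM: random input weights, least-squares output weights."""
        def __init__(self, n_hidden=30, rng=None):
            self.n_hidden = n_hidden
            self.rng = rng or np.random.default_rng()

        def fit(self, X, y):
            n_features = X.shape[1]
            self.W = self.rng.normal(size=(n_features, self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)      # hidden-layer output matrix
            self.beta = np.linalg.pinv(H) @ y     # Moore-Penrose least-squares solution
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    def make_lagged(series, n_lags=5):
        """Turn a 1-D series into (lag-vector, next-value) training pairs."""
        X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
        return X, series[n_lags:]

    def reduce_error_prune(preds_val, y_val, max_size):
        """Greedily add the learner whose inclusion most lowers validation RMSE."""
        selected, current = [], np.zeros_like(y_val)
        for _ in range(max_size):
            best, best_rmse = None, np.inf
            for i in range(len(preds_val)):
                if i in selected:
                    continue
                avg = (current * len(selected) + preds_val[i]) / (len(selected) + 1)
                rmse = np.sqrt(np.mean((avg - y_val) ** 2))
                if rmse < best_rmse:
                    best, best_rmse = i, rmse
            selected.append(best)
            current = np.mean([preds_val[i] for i in selected], axis=0)
        return selected

    # Toy usage on a synthetic series, split chronologically (no shuffling).
    rng = np.random.default_rng(0)
    series = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.normal(size=400)
    X, y = make_lagged(series)
    n_train, n_val = 250, 75
    ensemble = [SimpleELM(rng=rng).fit(X[:n_train], y[:n_train]) for _ in range(20)]
    preds_val = [m.predict(X[n_train:n_train + n_val]) for m in ensemble]
    kept = reduce_error_prune(preds_val, y[n_train:n_train + n_val], max_size=7)
    # Level-1 combination by simple averaging over the pruned subensemble.
    y_hat = np.mean([ensemble[i].predict(X[n_train + n_val:]) for i in kept], axis=0)
    print("test RMSE:", np.sqrt(np.mean((y_hat - y[n_train + n_val:]) ** 2)))

Note that the chronological train/validation/test split matters here: shuffled k-fold cross-validation leaks future values into the past, which is why TSP pipelines like this one hold out the most recent points for evaluation.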





Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant No. 61473150.

Author information

Corresponding author: Qun Dai.


About this article


Cite this article

Ma, Z., Dai, Q. Selected an Stacking ELMs for Time Series Prediction. Neural Process Lett 44, 831–856 (2016). https://doi.org/10.1007/s11063-016-9499-9

