A Dynamic ELM with Balanced Variance and Bias for Long-Term Online Prediction

Abstract

For long-term online prediction of nonlinear time series, determining a network architecture that keeps pace with a time-varying data stream is a recognized challenge. To address it, we propose a dynamic ELM with balanced variance and bias. The method accounts for a suitable degree of fit, reflecting how applicable the model remains during the sequential learning phase, and an automatic model-update strategy is triggered by the shifting error of each sequence fragment. Adjustable parameters mitigate overfitting and underfitting simultaneously and avoid the trial and error of manual tuning, keeping the method practical for long-term online prediction. Furthermore, the number of hidden nodes and the regularization parameter are computed from the fast-changing incoming data, so an optimal network architecture is built quantitatively. Experimental results verify that the proposed algorithm achieves better generalization performance on a variety of long-term regression problems.
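The abstract builds on the online-sequential ELM framework: a fixed random hidden layer whose output weights are updated recursively as data chunks arrive, with a ridge term controlling the variance-bias trade-off. The Python sketch below illustrates that underlying update only; the class name, sigmoid activation, and hyperparameter values are illustrative assumptions, and it deliberately omits the paper's contribution, the dynamic adaptation of the hidden-node count and regularization parameter.

```python
import numpy as np

class SequentialELM:
    """Regularized ELM with an OS-ELM-style recursive update (sketch)."""

    def __init__(self, n_inputs, n_hidden, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        # Random input weights and biases stay fixed after initialization.
        self.W = rng.standard_normal((n_inputs, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        # P approximates (H^T H + reg * I)^{-1}; the ridge term `reg`
        # trades bias against variance in the output weights.
        self.P = np.eye(n_hidden) / reg
        self.beta = np.zeros(n_hidden)

    def _hidden(self, X):
        # Sigmoid hidden-layer activations.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def partial_fit(self, X, y):
        # Recursive least-squares update for one chunk of the stream.
        H = self._hidden(X)
        K = self.P @ H.T @ np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage on a noisy sine stream: learn chunk by chunk, predict one step ahead.
rng = np.random.default_rng(1)
t = np.linspace(0, 20, 400)
series = np.sin(t) + 0.05 * rng.standard_normal(t.size)
X = series[:-1].reshape(-1, 1)   # previous value as the single input feature
y = series[1:]
model = SequentialELM(n_inputs=1, n_hidden=30, reg=1e-2)
for start in range(0, len(X), 50):   # sequential 50-sample chunks
    model.partial_fit(X[start:start + 50], y[start:start + 50])
print("last prediction:", model.predict(X[-1:]))
```

In the paper's dynamic variant, `n_hidden` and `reg` would not be fixed as above but recomputed from the recent stream whenever the fragment error shifts.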




Acknowledgements

The work was supported by the National Key Research Project of China under Grant No. 2016YFB1001304, the National Natural Science Foundation of China under Grant 61572229, the JLUSTIRT High-level Innovation Team, and the Fundamental Research Funds for Central Universities under Grant No. 2017TD-19. The authors gratefully acknowledge financial support from the Research Centre for Intelligent Signal Identification and Equipment, Jilin Province.

Author information

Corresponding author

Correspondence to Xiaoying Sun.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Yu, H., Sun, X. & Wang, J. A Dynamic ELM with Balanced Variance and Bias for Long-Term Online Prediction. Neural Process Lett 49, 1257–1271 (2019). https://doi.org/10.1007/s11063-018-9865-x
