Low complexity adaptive forgetting factor for online sequential extreme learning machine (OS-ELM) for application to nonstationary system estimations

  • Extreme Learning Machine's Theory & Application
  • Published in: Neural Computing and Applications

Abstract

Huang et al. (2004) recently proposed the online sequential extreme learning machine (OS-ELM), which enables the extreme learning machine (ELM) to learn from data arriving one-by-one or chunk-by-chunk. OS-ELM is based on a recursive least-squares (RLS)-type algorithm with a constant forgetting factor: the parameters of the hidden nodes are selected randomly, and the output weights are updated from the sequentially arriving data. However, OS-ELM with a constant forgetting factor cannot provide satisfactory performance in time-varying or nonstationary environments. We therefore propose an OS-ELM algorithm with an adaptive forgetting factor that maintains good performance in such environments. The proposed algorithm has two advantages: (1) the adaptive forgetting factor adds only O(N) complexity, where N is the number of hidden neurons, and (2) its performance is comparable to that of the conventional OS-ELM with an optimal constant forgetting factor.
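To make the sequential update concrete, below is a minimal sketch of OS-ELM in its RLS form, where the forgetting factor enters the recursion. This is an illustration under stated assumptions, not the paper's method: the forgetting factor `lam` is held constant here, whereas the paper adapts it online at O(N) extra cost (the adaptation rule is not given in the abstract), and the class and method names (`OSELM`, `fit_initial`, `update`) are hypothetical.

```python
import numpy as np

class OSELM:
    """OS-ELM with a fixed RLS forgetting factor -- illustrative sketch only."""

    def __init__(self, n_inputs, n_hidden, lam=0.98, seed=None):
        rng = np.random.default_rng(seed)
        # Hidden-node parameters are drawn randomly and never retrained,
        # as in ELM; only the output weights beta are learned.
        self.W = rng.standard_normal((n_inputs, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        self.lam = lam          # forgetting factor, 0 < lam <= 1 (fixed here)
        self.P = None           # inverse correlation matrix (N x N)
        self.beta = None        # output weights (N x n_outputs)

    def _hidden(self, X):
        # Sigmoid hidden layer: h(x) = g(W x + b)
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit_initial(self, X0, T0):
        # Batch initialization from a small chunk:
        # P0 = (H0' H0)^-1 (small ridge term for invertibility),
        # beta0 = P0 H0' T0.
        H = self._hidden(X0)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T0

    def update(self, x, t):
        # One-by-one RLS update with forgetting factor lam:
        # gain = P h' / (lam + h P h'); beta += gain (t - h beta);
        # P = (P - gain h P) / lam.
        h = self._hidden(np.atleast_2d(x))           # 1 x N
        Ph = self.P @ h.T                            # N x 1
        gain = Ph / (self.lam + (h @ Ph).item())     # N x 1
        self.beta = self.beta + gain @ (np.atleast_2d(t) - h @ self.beta)
        self.P = (self.P - gain @ Ph.T) / self.lam

    def predict(self, X):
        return self._hidden(np.atleast_2d(X)) @ self.beta

# Example usage on a toy stream (shapes only; the data is hypothetical):
#   model = OSELM(n_inputs=4, n_hidden=20, lam=0.98, seed=0)
#   model.fit_initial(X0, T0)        # initial chunk, e.g. 50 x 4 and 50 x 1
#   for x, t in stream:
#       model.update(x, t)           # sequential one-by-one learning
```

In this form, lam close to 1 weights past samples almost uniformly, while smaller lam discounts old data faster, which helps tracking in nonstationary streams at the cost of noisier estimates; the paper's contribution is choosing the forgetting factor automatically instead of hand-tuning it.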


References

  1. Huang G-B, Zhu Q-Y, Siew C-K (2004) Extreme learning machine: a new learning scheme of feedforward neural networks. In: Proceedings of the international joint conference on neural networks (IJCNN 2004), Budapest, 25–29 July 2004

  2. Park J, Sandberg IW (1991) Universal approximation using radial-basis-function networks. Neural Comput 3:246–257

  3. Barron AR (1993) Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans Inf Theory 39:930–945

  4. Leshno M, Lin VY, Pinkus A, Schocken S (1993) Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Netw 6:861–867

  5. Huang G-B, Wang DH, Lan Y (2011) Extreme learning machines: a survey. Int J Mach Learn Cybern 2:107–122

  6. Lowe D (1989) Adaptive radial basis function nonlinearities and the problem of generalisation. In: Proceedings of the first IEE international conference on artificial neural networks, London, pp 171–175

  7. Igelnik B, Pao Y-H (1995) Stochastic choice of basis functions in adaptive function approximation and the functional-link net. IEEE Trans Neural Netw 6:1320–1329

  8. Baum E (1988) On the capabilities of multilayer perceptrons. J Complex 4:193–215

  9. Ferrari S, Stengel RF (2005) Smooth function approximation using neural networks. IEEE Trans Neural Netw 16:24–38

  10. Huang G-B, Li M-B, Chen L, Siew C-K (2008) Incremental extreme learning machine with fully complex hidden nodes. Neurocomputing 71:576–583

  11. ELM web portal. http://www.ntu.edu.sg/home/egbhuang

  12. Lim J, Jeon J, Lee S (2006) Recursive complex extreme learning machine with widely linear processing for nonlinear channel equalizer. LNCS 3973:128–134

  13. Liang N-Y, Huang G-B, Saratchandran P, Sundararajan N (2006) A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans Neural Netw 17:1411–1423

  14. Huang G-B, Zhu Q-Y, Mao KZ, Siew C-K, Saratchandran P, Sundararajan N (2006) Can threshold networks be trained directly? IEEE Trans Circuits Syst II Exp Briefs 53:187–191

  15. Huang G-B, Zhu Q-Y, Siew C-K (2006) Extreme learning machine: theory and applications. Neurocomputing 70(1–3):489–501

  16. Huang G-B, Chen L, Siew C-K (2006) Universal approximation using incremental constructive feedforward networks with random hidden nodes. IEEE Trans Neural Netw 17:879–892

  17. Haykin S (2002) Adaptive filter theory, 4th edn. Prentice Hall, NJ

  18. Song S, Sung KM (2007) Reduced complexity self-tuning adaptive algorithms in application to channel estimation. IEEE Trans Commun 55:1448–1452

  19. Lee S, Lim J, Sung K-M (2009) A low-complexity AFF-RLS algorithm using a normalization technique. IEICE Electron Exp 6:1774–1780

  20. Niedzwiecki M (2000) Identification of time-varying processes. Wiley, West Sussex

  21. Paleologu C, Benesty J, Ciochina S (2008) A robust variable forgetting factor recursive least-squares algorithm for system identification. IEEE Signal Process Lett 15:597–600

  22. Tuan P, Lee S, Hou W (1997) An efficient on-line thermal input estimation method using Kalman filter and recursive least square algorithm. Inverse Probl Eng 5:309–333

  23. Kim H-S, Lim J-S, Baek S, Sung K-M (2001) Robust Kalman filtering with variable forgetting factor against impulsive noise. IEICE Trans Fundam E84-A:363–366

  24. Yang B (1995) Projection approximation subspace tracking. IEEE Trans Signal Process 43:95–107

  25. Lee K, Gan W, Kuo S (2009) Subband adaptive filtering: theory and implementation. Wiley, West Sussex

  26. Huang G-B, Chen L (2007) Convex incremental extreme learning machine. Neurocomputing 70:3056–3062

  27. Han K, Lee S, Lim J, Sung K (2004) Channel estimation for OFDM with fast fading channels by modified Kalman filter. IEEE Trans Consumer Electron 50:443–449

  28. Rappaport T (1996) Wireless communications: principles and practice. Prentice Hall, NJ

  29. Adali T, Liu X (1997) Canonical piecewise linear network for nonlinear filtering and its application to blind equalization. Signal Process 61:145–155

  30. Holland PW, Welsch RE (1977) Robust regression using iteratively reweighted least-squares. Commun Stat Theory Methods A 6:813–827

Acknowledgments

This work was supported by a National Research Foundation of Korea (NRF) grant.

Author information

Corresponding author

Correspondence to Jun-seok Lim.

About this article

Cite this article

Lim, J.-S., Lee, S. & Pang, H.-S. Low complexity adaptive forgetting factor for online sequential extreme learning machine (OS-ELM) for application to nonstationary system estimations. Neural Comput & Applic 22, 569–576 (2013). https://doi.org/10.1007/s00521-012-0873-x

