
Length-Changeable Incremental Extreme Learning Machine

  • Regular Paper
  • Published in: Journal of Computer Science and Technology

Abstract

Extreme learning machine (ELM) is a learning algorithm for generalized single-hidden-layer feed-forward networks (SLFNs). To obtain a suitable network architecture, Incremental Extreme Learning Machine (I-ELM) constructs SLFNs by adding hidden nodes one by one. Although various I-ELM-class algorithms have been proposed to improve the convergence rate or to minimize the training error, they either leave I-ELM's one-node-at-a-time construction scheme unchanged or face the risk of over-fitting. Making the testing error converge quickly and stably therefore remains an important issue. In this paper, we propose a new incremental ELM, referred to as Length-Changeable Incremental Extreme Learning Machine (LCI-ELM). It allows more than one hidden node to be added to the network at a time, and the existing network is regarded as a whole when the output weights are tuned. The output weights of the newly added hidden nodes are determined using a partial error-minimizing method. We prove that an SLFN constructed using LCI-ELM has universal approximation capability on compact input sets as well as on finite training sets. Experimental results demonstrate that LCI-ELM achieves a higher convergence rate and a lower over-fitting risk than several competitive I-ELM-class algorithms.
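The abstract only outlines the construction; a minimal sketch of the block-wise incremental idea it describes might look as follows. This is a hypothetical illustration, not the paper's algorithm: the function names (`lci_elm_fit`, `lci_elm_predict`), the choice of sigmoid nodes, and the strategy of fitting each new block's output weights to the current residual by least squares (one plausible reading of "partial error-minimizing") are all assumptions.

```python
import numpy as np

def lci_elm_fit(X, y, block_size=5, n_blocks=10, seed=0):
    """Hypothetical sketch: grow an SLFN by adding `block_size` random
    sigmoid hidden nodes per step and fitting only the new block's
    output weights to the current residual (partial error minimization)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    weights, biases, betas = [], [], []
    residual = y.astype(float).copy()
    for _ in range(n_blocks):
        W = rng.standard_normal((d, block_size))   # random input weights (not tuned)
        b = rng.standard_normal(block_size)        # random biases (not tuned)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))     # hidden-layer output of the new block
        # Least-squares fit of the new block's output weights to the residual error.
        beta, *_ = np.linalg.lstsq(H, residual, rcond=None)
        residual -= H @ beta                       # update the residual for the next block
        weights.append(W); biases.append(b); betas.append(beta)
    return weights, biases, betas

def lci_elm_predict(X, model):
    """Sum the contributions of all added blocks."""
    weights, biases, betas = model
    out = np.zeros(X.shape[0])
    for W, b, beta in zip(weights, biases, betas):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        out += H @ beta
    return out
```

Because each block is fit to the residual left by its predecessors, the training error is non-increasing as blocks are added, which is the behavior the incremental construction relies on.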



Author information

Correspondence to He Jiang.

Electronic supplementary material

ESM 1 (PDF 51 kb)


Cite this article

Wu, YX., Liu, D. & Jiang, H. Length-Changeable Incremental Extreme Learning Machine. J. Comput. Sci. Technol. 32, 630–643 (2017). https://doi.org/10.1007/s11390-017-1746-7
