Extreme learning machine for interval neural networks


Abstract

Interval data offer a valuable way of representing the available information in complex problems where uncertainty, inaccuracy, or variability must be taken into account. This paper considers the learning of interval neural networks, whose inputs and outputs are vectors with interval components and whose weights are real numbers. As for usual real-valued neural networks, the back-propagation (BP) learning algorithm is very slow for interval neural networks, whereas the extreme learning machine (ELM) learns much faster. In this paper, ELM is applied to the learning of interval neural networks, resulting in an interval extreme learning machine (IELM). ELM for usual feedforward neural networks consists of two steps: first, randomly generate the weights connecting the input and hidden layers; second, use the Moore–Penrose generalized inverse to determine the weights connecting the hidden and output layers. The first step carries over directly to interval neural networks, but the second does not, because IELM involves nonlinear constraint conditions. Instead, following the idea of the BP algorithm, we form a nonlinear optimization problem to determine the weights connecting the hidden and output layers of IELM. Numerical experiments show that IELM is much faster than the usual BP algorithm. Moreover, the generalization performance of IELM is much better than that of BP, while its training error is slightly worse, suggesting that BP may over-fit the training data.
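To make the two ELM steps and the IELM modification concrete, below is a minimal sketch in Python with NumPy and SciPy. It is an illustration under assumptions, not the authors' implementation: the tanh activation, the bound-wise squared-error loss, the BFGS solver, and all function names (elm_train, interval_affine, ielm_train) are choices made here for readability.

```python
# Sketch (not the authors' code) of the two ELM steps, and of the IELM
# variant where the pseudoinverse step is replaced by a nonlinear
# optimization over the output weights.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden):
    """Standard ELM for real-valued data X -> T."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # step 1: random input
    b = rng.standard_normal(n_hidden)                # weights and biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # step 2: Moore-Penrose inverse
    return W, b, beta

def interval_affine(lo, hi, W, b=0.0):
    """Interval image of x -> xW + b for real weights W: positive weights
    keep the bound they are paired with, negative weights swap the bounds
    (standard interval arithmetic)."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return lo @ Wp + hi @ Wn + b, hi @ Wp + lo @ Wn + b

def ielm_train(X_lo, X_hi, T_lo, T_hi, n_hidden):
    """IELM: step 1 is unchanged; step 2 becomes a nonlinear problem,
    because the output bounds depend on the signs of the output weights."""
    W = rng.standard_normal((X_lo.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    net_lo, net_hi = interval_affine(X_lo, X_hi, W, b)
    H_lo, H_hi = np.tanh(net_lo), np.tanh(net_hi)    # tanh is monotone, so it
                                                     # maps the bounds directly
    n_out = T_lo.shape[1]

    def loss(beta_flat):
        beta = beta_flat.reshape(n_hidden, n_out)
        Y_lo, Y_hi = interval_affine(H_lo, H_hi, beta)
        return np.sum((Y_lo - T_lo) ** 2) + np.sum((Y_hi - T_hi) ** 2)

    # The loss is only piecewise smooth in beta (the bounds switch roles at
    # sign changes of the weights), so a quasi-Newton method with a numerical
    # gradient is a pragmatic, if rough, choice here.
    res = minimize(loss, np.zeros(n_hidden * n_out), method="BFGS")
    return W, b, res.x.reshape(n_hidden, n_out)

# Toy usage: learn the interval map [x-0.1, x+0.1] -> [sin(x)-0.1, sin(x)+0.1]
x = np.linspace(0, np.pi, 50).reshape(-1, 1)
W, b, beta = ielm_train(x - 0.1, x + 0.1, np.sin(x) - 0.1, np.sin(x) + 0.1, 20)
```

Because interval_affine produces bounds with Y_lo <= Y_hi by construction, the ordering constraint on the output intervals is absorbed into the formulation; the precise loss and constraints of the optimization problem solved in the paper may differ from this sketch.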



Acknowledgments

This work is supported by the National Natural Science Foundation of China (11171367) and the Fundamental Research Funds for the Central Universities of China.

Author information

Corresponding author

Correspondence to Wei Wu.


About this article

Cite this article

Yang, D., Li, Z. & Wu, W. Extreme learning machine for interval neural networks. Neural Comput & Applic 27, 3–8 (2016). https://doi.org/10.1007/s00521-013-1519-3
