
Fast Construction of Single-Hidden-Layer Feedforward Networks

Reference work entry in the Handbook of Natural Computing

Abstract

This chapter addresses two major issues: (i) how to obtain a more compact network architecture and (ii) how to reduce the overall computational complexity. An integrated analytic framework is introduced for the fast construction of single-hidden-layer feedforward networks (SLFNs) in two sequential phases. The first phase focuses on computational efficiency: fast computation of the unknown parameters and fast selection of the hidden nodes. The second phase focuses on improving the performance of the network obtained in the first phase. The proposed algorithm is evaluated on several benchmark problems.
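To make the two-phase idea concrete, the following is a minimal NumPy sketch, not the chapter's algorithm: phase one greedily selects hidden nodes from a random candidate pool by least-squares error reduction, and phase two revisits each selected node and swaps it for a better candidate if one exists. The sigmoid node type, pool size, selection criterion, and swap-based refinement are all illustrative assumptions; the chapter's contribution lies in performing such steps efficiently rather than by the brute-force re-fitting shown here.

# Minimal sketch of a two-phase SLFN construction (not the chapter's exact
# algorithm): phase 1 greedily selects hidden nodes from a random candidate
# pool by least-squares error reduction; phase 2 revisits each selected node
# and swaps it for a better candidate. Sigmoid nodes, the pool size, and the
# swap-based refinement are illustrative assumptions.
import numpy as np

def hidden_output(X, w, b):
    """Output of one sigmoid hidden node for all samples."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def fit_output_weights(H, y):
    """Least-squares output weights and residual error for the current nodes."""
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    resid = y - H @ beta
    return beta, float(resid @ resid)

def construct_slfn(X, y, n_hidden=10, pool_size=200, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Candidate pool: random input weights and biases (ELM-style nodes).
    pool = [(rng.standard_normal(d), rng.standard_normal()) for _ in range(pool_size)]
    cols = [hidden_output(X, w, b) for w, b in pool]

    selected = []  # indices of chosen candidates
    # Phase 1: forward selection of the node giving the largest error reduction.
    for _ in range(n_hidden):
        best_j, best_err = None, np.inf
        for j in range(pool_size):
            if j in selected:
                continue
            H = np.column_stack([cols[k] for k in selected + [j]])
            _, err = fit_output_weights(H, y)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)

    # Phase 2: revisit each selected node and replace it if another candidate
    # further lowers the residual error with the remaining nodes held fixed.
    for pos in range(len(selected)):
        others = selected[:pos] + selected[pos + 1:]
        best_j, best_err = selected[pos], np.inf
        for j in range(pool_size):
            if j in others:
                continue
            H = np.column_stack([cols[k] for k in others + [j]])
            _, err = fit_output_weights(H, y)
            if err < best_err:
                best_j, best_err = j, err
        selected[pos] = best_j

    H = np.column_stack([cols[k] for k in selected])
    beta, err = fit_output_weights(H, y)
    return [pool[k] for k in selected], beta, err

Calling construct_slfn(X, y) on a regression set returns the selected hidden-node parameters, the least-squares output weights, and the final residual error. Every candidate evaluation above re-solves a full least-squares problem; avoiding exactly this cost, through fast recursive updates of the parameters and selection criterion, is what the first phase of the chapter's framework targets.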



Acknowledgment

K. Li would like to acknowledge the helpful comments from Lei Chen of the National University of Singapore. He would also like to acknowledge the support of the International Exchange program of Queen's University Belfast.



Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this entry

Cite this entry

Li, K., Huang, GB., Ge, S.S. (2012). Fast Construction of Single-Hidden-Layer Feedforward Networks. In: Rozenberg, G., Bäck, T., Kok, J.N. (eds) Handbook of Natural Computing. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-92910-9_16
