Abstract
This paper proposes a modified extreme learning machine (ELM) algorithm that properly selects the input weights and biases before training the output weights of single-hidden-layer feedforward neural networks with a sigmoidal activation function, and proves mathematically that the resulting hidden-layer output matrix maintains full column rank. The modified ELM thus avoids the randomness inherent in the original ELM. Experimental results on both regression and classification problems show the good performance of the modified ELM algorithm.
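For context, the sketch below illustrates the standard ELM training procedure that the paper modifies, assuming NumPy; the random draw of the input weights and biases marks the step that the modified algorithm replaces with a deterministic selection. The paper's own selection scheme is not reproduced here, and all function names and the initialization range are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    """Sigmoidal activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, T, n_hidden, seed=None):
    """Standard ELM training for a single-hidden-layer feedforward
    network: draw the input weights and biases, form the hidden-layer
    output matrix H, then solve for the output weights beta in the
    least-squares sense via the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Original ELM draws W and b at random; the paper's modification
    # replaces this step with a deterministic selection that guarantees
    # H has full column rank (that scheme is not reproduced here).
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = sigmoid(X @ W + b)        # H is N x n_hidden
    beta = np.linalg.pinv(H) @ T  # beta = H^+ T, minimum-norm least-squares fit
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Network output for new inputs."""
    return sigmoid(X @ W + b) @ beta
```

When H has full column rank, H^+ = (H^T H)^{-1} H^T and the least-squares solution for beta is unique; guaranteeing that rank property is precisely what the paper's selection of input weights and biases provides.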
Acknowledgments
We would like to thank Feilong Cao for his suggestions on this paper. The support of the National Natural Science Foundation of China (Nos. 90818020, 10871226, 61179041) is gratefully acknowledged.
About this article
Cite this article
Chen, Z.X., Zhu, H.Y. & Wang, Y.G. A modified extreme learning machine with sigmoidal activation functions. Neural Comput & Applic 22, 541–550 (2013). https://doi.org/10.1007/s00521-012-0860-2