Abstract
The incremental extreme learning machine (I-ELM) has been shown to possess the universal approximation capability. However, two major issues limit its efficiency: first, some "random" hidden nodes are inefficient, which lowers the convergence rate and increases the structural complexity of the network; second, the final output weight vector is not the minimum-norm least-squares solution, which weakens the generalization capability. To address these issues, this paper proposes a simple and efficient algorithm in which the parameters of the even-numbered hidden nodes are calculated by fitting the residual error vector of the previous phase, after which all existing output weights are recursively updated based on the inverse of a partitioned matrix. The algorithm reduces the number of inefficient hidden nodes and yields an output weight vector that is always the minimum-norm least-squares solution. Theoretical analyses and experimental results show that the proposed algorithm outperforms other incremental extreme learning machine algorithms in convergence rate, generalization capability, and structural complexity.
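For intuition, the following is a minimal NumPy sketch of a semi-random incremental loop in the spirit the abstract describes: odd-numbered hidden nodes are generated randomly, even-numbered nodes are fitted to the current residual, and all output weights are recomputed as the minimum-norm least-squares solution after every addition. Everything here is an illustrative assumption, not the authors' implementation; in particular, semi_random_ielm is a hypothetical name, the linear least-squares proxy for fitting even nodes is a simplification, and np.linalg.pinv stands in for the paper's recursive inverse-partitioned-matrix update.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def semi_random_ielm(X, y, n_hidden=20, seed=0):
    """Illustrative semi-random incremental ELM (a sketch, not the paper's algorithm).

    Odd-numbered hidden nodes are random; even-numbered nodes are fitted to the
    current residual. After each node is added, ALL output weights are recomputed
    as the minimum-norm least-squares solution via np.linalg.pinv, which stands
    in for the paper's recursive inverse-partitioned-matrix update.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    params = []                      # (w, b) for each hidden node
    H = np.empty((n, 0))             # hidden-layer output matrix, one column per node
    beta = np.empty(0)               # output weight vector
    residual = y.copy()
    for k in range(1, n_hidden + 1):
        if k % 2 == 1:
            # Odd node: random input weights and bias, as in ordinary I-ELM.
            w, b = rng.standard_normal(d), rng.standard_normal()
        else:
            # Even node: fit (w, b) to the residual with a linear least-squares
            # proxy, then pass through the activation (a deliberate simplification).
            A = np.hstack([X, np.ones((n, 1))])
            sol, *_ = np.linalg.lstsq(A, residual, rcond=None)
            w, b = sol[:d], sol[d]
        params.append((w, b))
        H = np.column_stack([H, sigmoid(X @ w + b)])
        beta = np.linalg.pinv(H) @ y       # minimum-norm least-squares solution
        residual = y - H @ beta
    return params, beta

# Toy usage: fit y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
params, beta = semi_random_ielm(X, y, n_hidden=30)
H = np.column_stack([sigmoid(X @ w + b) for w, b in params])
print("training RMSE:", np.sqrt(np.mean((H @ beta - y) ** 2)))

Recomputing the pseudoinverse from scratch at every step is wasteful; the point of the recursive update described in the abstract is to obtain the same minimum-norm solution incrementally as each node is appended, without the full recomputation.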
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Zeng, G., Yao, F. & Zhang, B. Inverse partitioned matrix-based semi-random incremental ELM for regression. Neural Comput & Applic 32, 14263–14274 (2020). https://doi.org/10.1007/s00521-019-04289-4