Abstract
The extreme learning machine (ELM) framework provides an efficient way to construct single-hidden-layer feedforward networks (SLFNs). Its main idea is that the input weights and input biases of the hidden nodes are randomly generated, so during training only the output weights of the hidden nodes need to be adjusted. The existing incremental learning algorithms for ELMs, the incremental ELM (I-ELM) and the convex I-ELM (CI-ELM), cannot handle the fault situation. This paper proposes two fault-tolerant incremental ELM algorithms, namely the fault-tolerant I-ELM (FTI-ELM) and the fault-tolerant CI-ELM (FTCI-ELM). The FTI-ELM tunes only the output weight of the newly added node to minimize the training set error of the faulty networks, keeping all previously learned weights unchanged. Its fault-tolerant performance is better than that of the I-ELM and the CI-ELM. To further improve the performance, the FTCI-ELM is proposed. It tunes the output weight of the newly added node and, in addition, uses a simple scheme to modify the existing output weights, so as to maximize the reduction in the training set error of the faulty networks.
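To make the incremental recipe concrete, the sketch below (not taken from the paper) shows an I-ELM-style constructive loop in Python/NumPy: each new hidden node receives random input weights and a random bias, and only its output weight is fitted to the current residual error. The `sigma2` shrinkage factor is an assumed multiplicative weight-noise penalty included purely for illustration of the fault-tolerant idea; the exact objectives used by FTI-ELM and FTCI-ELM are defined in the paper itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def incremental_elm(X, y, n_nodes=50, sigma2=0.0, rng=None):
    """I-ELM-style constructive training (illustrative sketch).

    One hidden node is added per iteration; its input weights and bias are
    random, and only its output weight beta is fitted to the current residual.
    When sigma2 > 0, beta is shrunk to account for an assumed multiplicative
    weight-noise variance (an illustrative fault-tolerant adjustment, not
    necessarily the exact FTI-ELM update from the paper).
    """
    rng = np.random.default_rng(rng)
    n_samples, n_features = X.shape
    residual = y.astype(float).copy()                  # current residual error e
    weights, biases, betas = [], [], []

    for _ in range(n_nodes):
        a = rng.uniform(-1.0, 1.0, size=n_features)    # random input weights
        b = rng.uniform(-1.0, 1.0)                     # random input bias
        h = sigmoid(X @ a + b)                         # outputs of the new hidden node

        # Least-squares output weight on the residual; the (1 + sigma2)
        # factor penalizes large weights under the assumed noise model.
        beta = (residual @ h) / ((1.0 + sigma2) * (h @ h))

        residual -= beta * h                           # update residual
        weights.append(a)
        biases.append(b)
        betas.append(beta)

    return np.array(weights), np.array(biases), np.array(betas)

def predict(X, weights, biases, betas):
    H = sigmoid(X @ weights.T + biases)                # hidden-layer output matrix
    return H @ betas

# Toy usage: fit a noisy 1-D regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
W, b, beta = incremental_elm(X, y, n_nodes=100, sigma2=0.01, rng=1)
print("training MSE:", np.mean((predict(X, W, b, beta) - y) ** 2))
```

A CI-ELM- or FTCI-ELM-style variant would additionally rescale the previously learned output weights when a new node is added, rather than leaving them fixed as this sketch does.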
Cite this paper
Leung, HC., Leung, CS., Wong, E.W.M. (2016). Fault-Tolerant Incremental Learning for Extreme Learning Machines. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol 9948. Springer, Cham. https://doi.org/10.1007/978-3-319-46672-9_20