Abstract
We discuss the fault tolerance of multilayer perceptrons whose input and output learning examples are patterns consisting of 0s and 1s. The faults dealt with are multiple neuron and/or weight faults, where the neurons are in the hidden layer and the weights lie between the hidden and output layers. We theoretically analyze the condition under which a multilayer perceptron is tolerant to such multiple neuron-and-weight faults. Based on this analysis, we propose two value injection methods, denoted VIM-WN and VIM-N, which make multilayer perceptrons tolerant to all multiple neuron and/or weight faults whose values lie in a multi-dimensional interval. In VIM-WN, during the learning phase, the extreme values specified by the fault ranges are injected simultaneously into the outputs of selected neurons and the weights of selected links. In VIM-N, the extreme values specified by the fault ranges are injected only into the outputs of selected neurons. First, we present an algorithm based on VIM-WN and prove that a multilayer perceptron that has successfully finished learning by VIM-WN is tolerant to all multiple neuron-and-weight faults whose values lie in the interval, provided that the multiplicity of the fault is within a certain number determined by the faulty neurons and weights. Next, we present the corresponding algorithm and proof for VIM-N. Simulations confirm the analytical results for VIM-WN and VIM-N. We also examine by simulation the degree of fault tolerance to multiple neuron-and-weight faults attained by VIM-N and VIM-W, where VIM-W is the method proposed in [1], and show that VIM-N and VIM-W, as well as VIM-WN, are almost equally effective in coping with multiple neuron-and-weight faults. In addition, we report data on the learning time and the success rate of learning.
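To make the injection idea concrete, the following is a minimal NumPy sketch of one training step, assuming a single hidden layer, sigmoid units, a shared fault interval [lo, hi] for neurons and weights, and random selection of the clamped elements. All names (train_step, lo, hi, k) and the selection strategy are illustrative assumptions, not the authors' algorithms, which the chapter specifies precisely.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, t, W1, W2, lr=0.5, mode="VIM-WN", lo=0.0, hi=1.0, k=1):
    """One backprop step with value injection (illustrative sketch).

    mode == "VIM-N":  clamp k randomly chosen hidden outputs to an extreme
                      of the assumed fault interval [lo, hi].
    mode == "VIM-WN": additionally clamp k randomly chosen hidden-to-output
                      weights to an extreme at the same time.
    """
    h = sigmoid(W1 @ x)
    hid = rng.choice(h.size, size=k, replace=False)
    h_inj = h.copy()
    h_inj[hid] = rng.choice([lo, hi], size=k)            # neuron-fault injection

    W2_inj = W2.copy()
    if mode == "VIM-WN":
        pos = rng.choice(W2.size, size=k, replace=False)
        W2_inj.flat[pos] = rng.choice([lo, hi], size=k)  # weight-fault injection

    y = sigmoid(W2_inj @ h_inj)

    # Backprop through the injected network; clamped neurons and clamped
    # weights are constants, so they pass no gradient to the stored parameters.
    delta_o = (y - t) * y * (1.0 - y)
    grad_W2 = np.outer(delta_o, h_inj)
    if mode == "VIM-WN":
        grad_W2.flat[pos] = 0.0
    delta_h = (W2_inj.T @ delta_o) * h_inj * (1.0 - h_inj)
    delta_h[hid] = 0.0
    grad_W1 = np.outer(delta_h, x)

    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    return 0.5 * np.sum((y - t) ** 2)

# Toy usage: learn XOR-like 0/1 patterns while extreme fault values are injected.
if __name__ == "__main__":
    W1 = rng.normal(scale=0.5, size=(4, 2))
    W2 = rng.normal(scale=0.5, size=(1, 4))
    data = [(np.array([a, b], float), np.array([float(a ^ b)]))
            for a in (0, 1) for b in (0, 1)]
    for epoch in range(2000):
        for x, t in data:
            train_step(x, t, W1, W2, mode="VIM-WN", k=1)

The design point the sketch illustrates is that the network is trained on its own worst-case faulty configurations: by repeatedly forcing neuron outputs and link weights to the interval extremes during learning, the learned weights are pushed toward solutions whose outputs remain correct under any fault whose value stays inside the interval.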
References
Takanami, I., Oyama, Y.: A novel learning algorithm which makes multilayer neural networks multiple-weight-fault tolerant. IEICE Trans. Inf. & Syst. E86-D(12), 2536–2543 (2003)
Phatak, D.S., Koren, I.: Complete and partial fault tolerance of feedforward neural nets. IEEE Trans. Neural Networks 6(2), 446–456 (1995)
Fahlman, S.E., et al.: Neural nets learning algorithms and benchmarks database. Maintained by S.E. Fahlman et al. at the Computer Science Dept., Carnegie Mellon University
Nijhuis, J., Hoefflinger, B., van Schaik, A., Spaanenburg, L.: Limits to the fault-tolerance of a feedforward neural network with learning. In: Proc. Int’l Symp. on FTCS, pp. 228–235 (1990)
Tan, Y., Nanya, T.: A fault-tolerant multi-layer neural network model and its properties. IEICE D-I J76-D-I(7), 380–389 (1993) (in Japanese)
Clay, R.D., Séquin, C.H.: Fault tolerance training improves generalization and robustness. In: Proc. Int’l. J. Conf. on Neural Networks, pp. I-769–I-774 (1992)
Ito, T., Takanami, I.: On fault injection approaches for fault tolerance of feedforward neural networks. In: Proc. Int’l Symp. on ATS, pp. 88–93 (1997)
Hammadi, N.C., Ito, H.: A learning algorithm for fault tolerant feedforward neural networks. IEICE Trans. Inf. & Syst. E80-D(1), 21–26 (1997)
Hammadi, N.C., Ohmameuda, T., Kaneko, K., Ito, H.: Dynamic constructive fault tolerant algorithm for feedforward neural networks. IEICE Trans. Inf. & Syst. E81-D(1), 115–123 (1998)
Cavalieri, S., Mirabella, O.: A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks. Neural Networks (Pergamon) 12(1), 91–106 (1999)
Kamiura, N., Hata, Y., Matsui, N.: Fault tolerant feedforward neural networks with learning algorithm based on synaptic weight limit. In: Proc. IEEE Int’l Workshop on On-Line Testing, pp. 222–226 (1999)
Kamiura, N., Taniguchi, Y., Hata, Y., Matsui, N.: A learning algorithm with activation function manipulation for fault tolerant neural networks. IEICE Trans. Inf. & Syst. E84-D(7), 899–905 (2001)
Takase, H., Kita, H., Hayashi, T.: Weight minimization approach for fault tolerant multi-layer neural networks. In: Proc. of Int’l J. Conf. on Neural Networks, pp. 2656–2660 (2001)
Horita, T., Takanami, I., Mori, M.: Learning algorithms which make multilayer neural networks multiple-weight-and-neuron-fault tolerant. IEICE Trans. Inf. & Syst. E91-D(4), 1168–1175 (2008)
Sum, J.P., Leung, C.S., Ho, K.I.J.: On-line node fault injection training algorithm for MLP networks: Objective function and convergence analysis. IEEE Trans. Neural Networks and Learning Systems 23(2), 211–222 (2012)
Ho, K., Leung, C.S., Sum, J.: Objective functions of online weight noise injection training algorithms for MLPs. IEEE Trans. Neural Networks 22(2), 317–323 (2011)
Ho, K.I.J., Leung, C.S., Sum, J.: Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks. IEEE Trans. Neural Networks 21(6), 938–947 (2010)
Sum, J.P.F., Leung, C.S., Ho, K.I.J.: On objective function, regularizer, and prediction error of a learning algorithm for dealing with multiplicative weight noise. IEEE Trans. Neural Networks 20(1), 124–138 (2009)
Murray, A.F., Edwards, P.J.: Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training. IEEE Trans. Neural Networks 5(5), 792–802 (1994)
Nishimura, K., Horita, T., Ootsu, M., Takanami, I.: Novel value injection learning methods which make multilayer neural networks multiple-weight-and-neuron-fault tolerant. In: Proc. CSREA Int’l Conf. on PDPTA, pp. 546–552 (July 2009)
Copyright information
© 2014 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Horita, T., Takanami, I., Nishimura, K. (2014). Multilayer Perceptrons Which Are Tolerant to Multiple Faults and Learnings to Realize Them. In: Gavrilova, M.L., Tan, C.J.K. (eds) Transactions on Computational Science XXII. Lecture Notes in Computer Science, vol 8360. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-54212-1_3
DOI: https://doi.org/10.1007/978-3-642-54212-1_3
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-54211-4
Online ISBN: 978-3-642-54212-1
eBook Packages: Computer Science (R0)