Multilayer Perceptrons Which Are Tolerant to Multiple Faults and Learnings to Realize Them

  • Chapter
Transactions on Computational Science XXII

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 8360)

Abstract

We discuss the fault tolerance of multilayer perceptrons whose input and output learning examples are patterns of 0s and 1s. The faults considered are multiple neuron and/or weight faults, where the neurons are in the hidden layer and the weights are on the links between the hidden and output layers. We theoretically analyze the conditions under which a multilayer perceptron is tolerant to such multiple neuron and weight faults. Based on this analysis, we propose two value injection methods, denoted VIM-WN and VIM-N, that make multilayer perceptrons tolerant to all multiple neuron and/or weight faults whose values lie in a multi-dimensional interval. In VIM-WN, during the learning phase the extreme values specified by the fault ranges are injected simultaneously into the outputs of selected neurons and into selected weights of the links; in VIM-N, they are injected only into the outputs of selected neurons. First, we present an algorithm based on VIM-WN and prove that a multilayer perceptron that has successfully finished learning by VIM-WN is tolerant to all multiple neuron-and-weight faults whose values lie in the interval, provided the multiplicity of the fault is within a certain number determined by the faulty neurons and weights. We then present the corresponding algorithm and proof for VIM-N. By simulation, we confirm the analytical results for VIM-WN and VIM-N. We also examine by simulation the degree of tolerance to multiple neuron-and-weight faults achieved by VIM-N and VIM-W, where VIM-W is the method proposed in [1], and show that VIM-N and VIM-W, as well as VIM-WN, are almost equally effective in coping with such faults. In addition, we report data on learning time and the success rate of learning.
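
To make the value-injection idea concrete, the following is a minimal Python/NumPy sketch of a VIM-WN-style learning loop: in each update, the extremes of assumed fault intervals are injected simultaneously into a selected hidden-neuron output and a selected hidden-to-output weight, and the network is trained under every such faulty configuration as well as the fault-free one. The fault intervals, network size, task (XOR over 0/1 patterns), learning rate, and the exhaustive selection rule here are illustrative assumptions, not the authors' exact algorithm; passing w_idx=None in every configuration would give the VIM-N variant, which clamps only neuron outputs.

    import itertools
    import numpy as np

    # Illustrative sketch of value-injection learning in the spirit of VIM-WN.
    # The intervals below are assumed for this example only; the paper defines
    # the tolerated fault ranges precisely.
    rng = np.random.default_rng(0)
    NEURON_EXTREMES = (0.0, 1.0)    # assumed fault interval for hidden outputs
    WEIGHT_EXTREMES = (-1.0, 1.0)   # assumed fault interval for hidden-output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x, W1, W2, n_idx=None, n_val=0.0, w_idx=None, w_val=0.0):
        """Forward pass with optional injected neuron/weight fault values."""
        h = sigmoid(W1 @ x)
        if n_idx is not None:
            h[n_idx] = n_val            # neuron output stuck at a fault extreme
        W2f = W2.copy()
        if w_idx is not None:
            W2f[0, w_idx] = w_val       # hidden-to-output weight stuck at a fault extreme
        return h, W2f, sigmoid(W2f @ h)

    def train(X, T, n_hidden=4, lr=0.2, epochs=3000):
        W1 = rng.normal(0.0, 0.5, (n_hidden, X.shape[1]))
        W2 = rng.normal(0.0, 0.5, (1, n_hidden))
        # Fault-free configuration plus every (neuron, weight) pair clamped to
        # the interval extremes "at the same time" (the VIM-WN idea).
        faults = [(None, 0.0, None, 0.0)] + [
            (i, nv, j, wv)
            for i, nv, j, wv in itertools.product(
                range(n_hidden), NEURON_EXTREMES, range(n_hidden), WEIGHT_EXTREMES)
        ]
        for _ in range(epochs):
            for x, t in zip(X, T):
                for n_idx, n_val, w_idx, w_val in faults:
                    h, W2f, y = forward(x, W1, W2, n_idx, n_val, w_idx, w_val)
                    d_out = (y - t) * y * (1.0 - y)        # output-layer delta
                    gW2 = np.outer(d_out, h)
                    d_hid = (W2f.T @ d_out) * h * (1.0 - h)
                    if n_idx is not None:
                        d_hid[n_idx] = 0.0   # clamped output: no gradient reaches its W1 row
                    if w_idx is not None:
                        gW2[0, w_idx] = 0.0  # clamped weight: freeze it for this update
                    W2 -= lr * gW2
                    W1 -= lr * np.outer(d_hid, x)
        return W1, W2

    # 0/1 input-output patterns, matching the paper's setting (XOR as a stand-in)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([0.0, 1.0, 1.0, 0.0])
    W1, W2 = train(X, T)
    for x, t in zip(X, T):
        print(x, t, forward(x, W1, W2)[2].round(2))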


References

  1. Takanami, I., Oyama, Y.: A novel learning algorithm which makes multilayer neural networks multiple-weight-fault tolerant. IEICE Trans. Inf. & Syst. E86-D(12), 2536–2543 (2003)

  2. Phatak, D.S., Koren, I.: Complete and partial fault tolerance of feedforward neural nets. IEEE Trans. Neural Networks 6(2), 446–456 (1995)

  3. Fahlman, S.E., et al.: Neural nets learning algorithms and benchmarks database. Computer Science Dept., Carnegie Mellon University

  4. Nijhuis, J., Hoefflinger, B., van Schaik, A., Spaanenburg, L.: Limits to the fault-tolerance of a feedforward neural network with learning. In: Proc. Int'l Symp. on FTCS, pp. 228–235 (1990)

  5. Tan, Y., Nanya, T.: A fault-tolerant multi-layer neural network model and its properties. IEICE D-I J76-D-I(7), 380–389 (1993) (in Japanese)

  6. Clay, R.D., Séquin, C.H.: Fault tolerance training improves generalization and robustness. In: Proc. Int'l J. Conf. on Neural Networks, pp. I-769–I-774 (1992)

  7. Ito, T., Takanami, I.: On fault injection approaches for fault tolerance of feedforward neural networks. In: Proc. Int'l Symp. on ATS, pp. 88–93 (1997)

  8. Hammadi, N.C., Ito, H.: A learning algorithm for fault tolerant feedforward neural networks. IEICE Trans. Inf. & Syst. E80-D(1), 21–26 (1997)

  9. Hammadi, N.C., Ohmameuda, T., Kaneko, K., Ito, H.: Dynamic constructive fault tolerant algorithm for feedforward neural networks. IEICE Trans. Inf. & Syst. E81-D(1), 115–123 (1998)

  10. Cavalieri, S., Mirabella, O.: A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks. Neural Networks (Pergamon) 12(1), 91–106 (1999)

  11. Kamiura, N., Hata, Y., Matsui, N.: Fault tolerant feedforward neural networks with learning algorithm based on synaptic weight limit. In: Proc. IEEE Int'l Workshop on On-Line Testing, pp. 222–226 (1999)

  12. Kamiura, N., Taniguchi, Y., Hata, Y., Matsui, N.: A learning algorithm with activation function manipulation for fault tolerant neural networks. IEICE Trans. Inf. & Syst. E84-D(7), 899–905 (2001)

  13. Takase, H., Kita, H., Hayashi, T.: Weight minimization approach for fault tolerant multi-layer neural networks. In: Proc. Int'l J. Conf. on Neural Networks, pp. 2656–2660 (2001)

  14. Horita, T., Takanami, I., Mori, M.: Learning algorithms which make multilayer neural networks multiple-weight-and-neuron-fault tolerant. IEICE Trans. Inf. & Syst. E91-D(4), 1168–1175 (2008)

  15. Sum, J.P., Leung, C.S., Ho, K.I.J.: On-line node fault injection training algorithm for MLP networks: Objective function and convergence analysis. IEEE Trans. Neural Networks and Learning Systems 23(2), 211–222 (2012)

  16. Ho, K., Leung, C.S., Sum, J.: Objective functions of online weight noise injection training algorithms for MLPs. IEEE Trans. Neural Networks 22(2), 317–323 (2011)

  17. Ho, K.I.J., Leung, C.S., Sum, J.: Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks. IEEE Trans. Neural Networks 21(6), 938–947 (2010)

  18. Sum, J.P.F., Leung, C.S., Ho, K.I.J.: On objective function, regularizer, and prediction error of a learning algorithm for dealing with multiplicative weight noise. IEEE Trans. Neural Networks 20(1), 124–138 (2009)

  19. Murray, A.F., Edwards, P.J.: Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training. IEEE Trans. Neural Networks 5(5), 792–802 (1994)

  20. Nishimura, K., Horita, T., Ootsu, M., Takanami, I.: Novel value injection learning methods which make multilayer neural networks multiple-weight-and-neuron-fault tolerant. In: Proc. CSREA Int'l Conf. on PDPTA, pp. 546–552 (July 2009)


Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Horita, T., Takanami, I., Nishimura, K. (2014). Multilayer Perceptrons Which Are Tolerant to Multiple Faults and Learnings to Realize Them. In: Gavrilova, M.L., Tan, C.J.K. (eds) Transactions on Computational Science XXII. Lecture Notes in Computer Science, vol 8360. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-54212-1_3

  • DOI: https://doi.org/10.1007/978-3-642-54212-1_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-54211-4

  • Online ISBN: 978-3-642-54212-1

