
On Node-Fault-Injection Training of an RBF Network

  • Conference paper
Advances in Neuro-Information Processing (ICONIP 2008)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5507)


Abstract

While fault injection during training has long been demonstrated to be an effective method of improving the fault tolerance of a neural network, little theoretical work has been done to explain these results. In this paper, two node-fault-injection-based on-line learning algorithms are studied: (1) injecting multinode faults during training and (2) weight decay combined with multinode fault injection. Their almost sure convergence is proved, and their corresponding objective functions are deduced.
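The abstract does not reproduce the update equations, but multinode-fault-injection training is standard enough to sketch. Below is a minimal NumPy illustration, assuming a Gaussian RBF network with linear output weights and an LMS-style on-line update: at each step a random fault mask (each hidden node stuck at zero with probability p_fault) is applied before the weight update, and a nonzero decay term gives the weight-decay variant. The function names, toy data, and exact parameterization are illustrative assumptions, not the paper's own formulation.

```python
import numpy as np

# Illustrative sketch; not the paper's exact update rule.
rng = np.random.default_rng(0)

def rbf_design(X, centers, width):
    """Gaussian RBF activations phi_j(x) = exp(-||x - c_j||^2 / width)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / width)

def train_fault_injection(X, y, centers, width, p_fault=0.1,
                          lr=0.05, decay=0.0, epochs=100):
    """On-line RBF training with multinode fault injection.

    At each step a random fault mask beta (beta_j = 0 with probability
    p_fault) is applied to the hidden-node outputs before the LMS-style
    weight update; decay > 0 gives the weight-decay variant.
    """
    M = centers.shape[0]
    w = np.zeros(M)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            phi = rbf_design(X[i:i + 1], centers, width)[0]
            beta = (rng.random(M) >= p_fault).astype(float)  # node fault mask
            err = y[i] - (beta * phi) @ w
            w += lr * (err * beta * phi - decay * w)
    return w

# Toy usage: fit y = sin(x) on [0, 2*pi] with 10 fixed centers.
X = np.linspace(0, 2 * np.pi, 40)[:, None]
y = np.sin(X[:, 0])
centers = np.linspace(0, 2 * np.pi, 10)[:, None]
w = train_fault_injection(X, y, centers, width=0.5, p_fault=0.2, decay=1e-3)
print(np.round(w, 3))
```

With decay=0 this reduces to plain multinode-fault-injection training, and the p_fault → 0 limit recovers ordinary on-line LMS.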





Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sum, J., Leung, C.S., Ho, K. (2009). On Node-Fault-Injection Training of an RBF Network. In: Köppen, M., Kasabov, N., Coghill, G. (eds) Advances in Neuro-Information Processing. ICONIP 2008. Lecture Notes in Computer Science, vol 5507. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03040-6_40


  • DOI: https://doi.org/10.1007/978-3-642-03040-6_40

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-03039-0

  • Online ISBN: 978-3-642-03040-6

  • eBook Packages: Computer Science (R0)
