
Obtaining Fault Tolerant Multilayer Perceptrons Using an Explicit Regularization

Neural Processing Letters

Abstract

When a learning algorithm is applied to an MLP structure, different solutions for the weight values are obtained if the parameters of the applied rule or the initial conditions are changed. These solutions can show similar learning performance, yet they differ in other respects, in particular in their fault tolerance against weight perturbations. In this paper, a backpropagation algorithm that maximizes fault tolerance is proposed. The algorithm explicitly adds to the backpropagation learning rule a new term, related to the mean square error degradation in the presence of weight deviations, in order to minimize this degradation. The results obtained demonstrate the efficiency of the proposed learning rule in comparison with other algorithms.
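The paper itself specifies the exact form of the added regularization term; as a rough illustration of the idea only, the sketch below augments a plain MSE loss with a squared-gradient penalty that discourages weight solutions whose error is sensitive to small weight deviations. Everything in the sketch is an assumption for illustration: the use of JAX, the penalty weight `lam`, the network sizes, and the XOR toy data are not taken from the paper, and the squared-gradient term is only a simple stand-in for the authors' degradation measure, not their rule.

```python
# Minimal sketch, NOT the authors' exact algorithm: an MSE loss augmented
# with an explicit sensitivity penalty (squared norm of dMSE/dw), used here
# as a crude proxy for MSE degradation under small weight deviations.
# `lam`, the layer sizes, and the XOR data are illustrative assumptions.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(2, 8, 1)):
    """Random weights/biases for a one-hidden-layer MLP (sizes assumed)."""
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)          # hidden layers: tanh activation
    W, b = params[-1]
    return x @ W + b                     # linear output layer

def mse(params, x, y):
    return jnp.mean((mlp(params, x) - y) ** 2)

def regularized_loss(params, x, y, lam=0.01):
    # Explicit fault-tolerance term: penalize the squared gradient of the
    # MSE with respect to the weights, so that small weight perturbations
    # change the error as little as possible.
    g = jax.grad(mse)(params, x, y)
    penalty = sum(jnp.sum(gW ** 2) + jnp.sum(gb ** 2) for gW, gb in g)
    return mse(params, x, y) + lam * penalty

@jax.jit
def train_step(params, x, y, lr=0.1):
    # Plain gradient descent on the regularized loss; JAX differentiates
    # through the inner jax.grad, which is why it is used for this sketch.
    grads = jax.grad(regularized_loss)(params, x, y)
    return [(W - lr * gW, b - lr * gb)
            for (W, b), (gW, gb) in zip(params, grads)]

# Toy usage: learn XOR, then observe degradation under weight noise.
key = jax.random.PRNGKey(0)
x = jnp.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = jnp.array([[0.], [1.], [1.], [0.]])
params = init_params(key)
for _ in range(2000):
    params = train_step(params, x, y)
print("clean MSE:", mse(params, x, y))

# Illustrative fault-tolerance check: 5% multiplicative weight noise.
noisy = [(W * (1 + 0.05 * jax.random.normal(k, W.shape)), b)
         for (W, b), k in zip(params, jax.random.split(key, len(params)))]
print("noisy MSE:", mse(noisy, x, y))
```

The final two statements re-evaluate the error after perturbing the trained weights, which is one simple way to observe the kind of degradation the added term is meant to suppress.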




Cite this article

Bernier, J.L., Ortega, J., Rojas, I. et al. Obtaining Fault Tolerant Multilayer Perceptrons Using an Explicit Regularization. Neural Processing Letters 12, 107–113 (2000). https://doi.org/10.1023/A:1009698206772
