
Improving the Performance of Feedforward Neural Networks by Noise Injection into Hidden Neurons

Published in: Journal of Intelligent and Robotic Systems

Abstract

The generalization ability of feedforward neural networks (NNs) depends on the size of the training set and the characteristics of the training patterns. Theoretically, the best classification performance is obtained when all possible patterns are used to train the network, which is practically impossible. In this paper a new noise injection technique is proposed: noise is injected into the hidden neurons at the summation level. Assuming that the test patterns are drawn from the same population used to generate the training set, we show that noise injection into hidden neurons is equivalent to training with noisy input patterns (i.e., with a larger training set). The simulation results indicate that networks trained with the proposed technique and networks trained with noisy input patterns have almost the same generalization and fault tolerance abilities. The learning time required by the proposed method is considerably less than that required by training with noisy input patterns, and it is almost the same as that required by standard backpropagation on the original, noise-free input patterns.
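The following is a minimal sketch of the training step the abstract describes: during each forward pass, zero-mean noise is added to the hidden neurons' summations (pre-activations) before the activation function, and standard backpropagation proceeds unchanged. The Gaussian noise model, the sigma value, and the single-hidden-layer sigmoid architecture are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, t, W1, b1, W2, b2, lr=0.1, sigma=0.1):
    """One backprop step with noise injected at the hidden summation level.

    x: input vector, t: target vector; sigma is an assumed noise scale.
    """
    # Forward pass: add zero-mean Gaussian noise to the hidden
    # summation z1 before the activation (the proposed injection point).
    z1 = W1 @ x + b1 + sigma * rng.standard_normal(b1.shape)
    h = sigmoid(z1)
    y = sigmoid(W2 @ h + b2)

    # Standard backpropagation for a squared-error loss; the noise only
    # perturbs the forward pass, so the update rules are unchanged.
    delta2 = (y - t) * y * (1.0 - y)          # error at output summations
    delta1 = (W2.T @ delta2) * h * (1.0 - h)  # error at hidden summations
    W2 -= lr * np.outer(delta2, h)
    b2 -= lr * delta2
    W1 -= lr * np.outer(delta1, x)
    b1 -= lr * delta1
    return 0.5 * np.sum((y - t) ** 2)          # training loss, for monitoring
```

At test time no noise would be injected (sigma = 0), so inference costs the same as a plain network. This is also why the method is cheaper in learning time than explicitly enlarging the training set with noisy copies of each input pattern: each epoch still visits only the original patterns.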




Cite this article

Hammadi, N.C., Ito, H. Improving the Performance of Feedforward Neural Networks by Noise Injection into Hidden Neurons. Journal of Intelligent and Robotic Systems 21, 103–115 (1998). https://doi.org/10.1023/A:1007965819848
