
A New Learning Algorithm Using Simultaneous Perturbation with Weight Initialization


Abstract

A new learning algorithm is proposed for training single-hidden-layer feedforward neural networks. In each epoch, the connection weights are updated by simultaneous perturbation. A tunneling step, itself based on perturbation, is applied to escape local minima. The proposed technique is shown to give better convergence results on the selected benchmark problems: a neuro-controller, XOR, L-T character recognition, the two-spirals problem, a simple interaction function, a harmonic function, and a complicated interaction function.
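The abstract names two ingredients: a simultaneous-perturbation (SPSA-style) weight update and a perturbation-based tunneling step for escaping local minima. The following is a minimal sketch of how those pieces can fit together for a single-hidden-layer network. The stall-based tunneling trigger, the hyperparameters (a, c, tunnel_scale, patience), and all helper names are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(n_in, n_hidden, n_out):
    """Small random initial weights for a single-hidden-layer network."""
    return np.concatenate([
        rng.uniform(-0.5, 0.5, size=n_hidden * (n_in + 1)),   # input->hidden (+bias)
        rng.uniform(-0.5, 0.5, size=n_out * (n_hidden + 1)),  # hidden->output (+bias)
    ])

def forward(w, X, n_in, n_hidden, n_out):
    """Evaluate the network; all weights live in one flat vector w."""
    k = n_hidden * (n_in + 1)
    W1 = w[:k].reshape(n_hidden, n_in + 1)
    W2 = w[k:].reshape(n_out, n_hidden + 1)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias input
    H = np.tanh(Xb @ W1.T)
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])
    return Hb @ W2.T

def loss(w, X, Y, dims):
    return float(np.mean((forward(w, X, *dims) - Y) ** 2))

def train_spsa(X, Y, dims, epochs=2000, a=0.1, c=0.01,
               tunnel_scale=0.5, patience=50):
    w = init_weights(*dims)
    best, stall = np.inf, 0
    for _ in range(epochs):
        # Simultaneous perturbation: one +/- perturbation pair (two loss
        # evaluations) estimates the gradient for ALL weights at once.
        delta = rng.choice([-1.0, 1.0], size=w.size)          # Bernoulli +-1
        g = (loss(w + c * delta, X, Y, dims) -
             loss(w - c * delta, X, Y, dims)) / (2 * c) * delta
        w -= a * g
        cur = loss(w, X, Y, dims)
        # Crude tunneling stand-in (assumption): if the loss has stalled,
        # treat it as a local minimum and kick the weights with a larger
        # random perturbation.
        if cur < best - 1e-8:
            best, stall = cur, 0
        else:
            stall += 1
            if stall >= patience:
                w += tunnel_scale * rng.standard_normal(w.size)
                stall = 0
    return w

# Usage: XOR, one of the paper's benchmark problems.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)
dims = (2, 4, 1)  # (inputs, hidden units, outputs)
w = train_spsa(X, Y, dims)
print(forward(w, X, *dims).round(2))
```

The practical appeal the sketch illustrates is that simultaneous perturbation needs only two loss evaluations per epoch regardless of the number of weights, whereas coordinate-wise finite differences would need two per weight.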





Cite this article

Kathirvalavakumar, T., Thangavel, P. A New Learning Algorithm Using Simultaneous Perturbation with Weight Initialization. Neural Processing Letters 17, 55–68 (2003). https://doi.org/10.1023/A:1022919300793
