
Convergence Analysis of Batch Gradient Algorithm for Three Classes of Sigma-Pi Neural Networks

Neural Processing Letters

Abstract

Sigma-Pi (Σ-Π) neural networks (SPNNs) are known to provide more powerful mapping capability than traditional feed-forward neural networks. A unified convergence analysis of the batch gradient algorithm for SPNN learning is presented, covering three classes of SPNNs: Σ-Π-Σ, Σ-Σ-Π, and Σ-Π-Σ-Π. The monotonicity of the error function during the iteration is also guaranteed.
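
A Sigma-Pi unit computes a weighted sum (Sigma) of products (Pi) of its inputs, and batch gradient training updates the weights once per pass over the whole training set. As a rough illustration of that training rule only, and not of the paper's Σ-Π-Σ, Σ-Σ-Π, or Σ-Π-Σ-Π architectures or its proof, the sketch below shows a single sigmoid-output Sigma-Pi unit trained by batch gradient descent on a squared error; the product-term index sets PI_TERMS, the toy data, and the learning rate eta are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical product-term index sets: each tuple lists the input
# coordinates whose product forms one Pi node.
PI_TERMS = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]

def pi_layer(x):
    """Products of the selected input coordinates (one value per Pi node)."""
    return np.array([np.prod(x[list(t)]) for t in PI_TERMS])

def forward(X, w):
    """Network outputs for a batch X (rows are samples)."""
    P = np.array([pi_layer(x) for x in X])   # Pi-node activations, one row per sample
    return sigmoid(P @ w), P

def batch_gradient_step(X, y, w, eta):
    """One batch gradient step on E(w) = 0.5 * sum_j (y_j - o_j)^2."""
    o, P = forward(X, w)
    delta = (o - y) * o * (1.0 - o)           # dE/d(net input) for the sigmoid output
    grad = P.T @ delta                        # gradient accumulated over the whole batch
    return w - eta * grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(50, 3))       # toy inputs
    y = (X[:, 0] * X[:, 1] > 0).astype(float)      # toy targets
    w = rng.normal(scale=0.1, size=len(PI_TERMS))
    eta = 0.5                                      # fixed learning rate
    for epoch in range(300):
        w = batch_gradient_step(X, y, w, eta)
    o, _ = forward(X, w)
    print("final batch error:", 0.5 * np.sum((y - o) ** 2))
```

With a sufficiently small, fixed learning rate, the batch error in such a run typically decreases monotonically from epoch to epoch, which is the kind of behavior the paper's monotonicity result makes precise.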


Abbreviations

SPNN: Sigma-Pi neural network


Author information

Corresponding author

Correspondence to Wei Wu.


About this article

Cite this article

Zhang, C., Wu, W. & Xiong, Y. Convergence Analysis of Batch Gradient Algorithm for Three Classes of Sigma-Pi Neural Networks. Neural Process Lett 26, 177–189 (2007). https://doi.org/10.1007/s11063-007-9050-0


