Neural Network Learning Using Low-Discrepancy Sequence

  • Conference paper
Advanced Topics in Artificial Intelligence (AI 1999)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1747)

Abstract

Backpropagation (BP) is one of the most frequently used practical methods for supervised training of artificial neural networks. During the learning process, BP may get stuck in local minima, producing suboptimal solutions and limiting the effectiveness of the training. This work addresses the problem of avoiding local minima and introduces a new learning technique that replaces the gradient-descent algorithm in BP with an optimization method for global search in a multi-dimensional parameter (weight) space. For this purpose, a low-discrepancy LPτ sequence is used. The proposed method is discussed and tested on common benchmark problems.
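The abstract describes the general idea of replacing gradient descent with a global scan of the weight space using a low-discrepancy sequence. As a rough illustration only, and not the authors' exact LPτ procedure, the sketch below evaluates the training error of a small network at quasi-randomly distributed points of a bounded weight box and keeps the best point; it uses SciPy's Sobol generator, a closely related low-discrepancy sequence. The network architecture (2-2-1 on XOR), the weight bounds and the number of trial points are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch, NOT the authors' exact LP-tau algorithm: quasi-random global
# search over the weight space of a tiny 2-2-1 sigmoid network on the XOR task.
# Weight bounds, network size, and sample count are illustrative assumptions.
import numpy as np
from scipy.stats import qmc  # Sobol low-discrepancy sequences (SciPy >= 1.7)

# XOR training data
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def mse(w):
    """Mean squared error of a 2-2-1 sigmoid network with weights packed in w."""
    W1 = w[:4].reshape(2, 2)        # input-to-hidden weights
    b1 = w[4:6]                     # hidden biases
    W2 = w[6:8]                     # hidden-to-output weights
    b2 = w[8]                       # output bias
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return float(np.mean((out - y) ** 2))

dim = 9                                        # total number of weights and biases
sampler = qmc.Sobol(d=dim, scramble=False)
unit_pts = sampler.random_base2(m=12)          # 2**12 points in the unit hypercube
weights = qmc.scale(unit_pts, [-5.0] * dim, [5.0] * dim)  # assumed box [-5, 5]^9

errors = np.array([mse(w) for w in weights])
best = weights[np.argmin(errors)]
print(f"best MSE over {len(weights)} trial points: {errors.min():.4f}")
```

A local refinement step (for example, a few gradient-descent iterations started from the best point found) could follow such a global scan; that combination is a design choice of this sketch, not something stated in the abstract.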


Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Jordanov, I., Brown, R. (1999). Neural Network Learning Using Low-Discrepancy Sequence. In: Foo, N. (eds) Advanced Topics in Artificial Intelligence. AI 1999. Lecture Notes in Computer Science (LNAI), vol 1747. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46695-9_22

  • DOI: https://doi.org/10.1007/3-540-46695-9_22

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66822-0

  • Online ISBN: 978-3-540-46695-6
