
Dual Gradient Descent Algorithm on Two-Layered Feed-Forward Artificial Neural Networks

  • Conference paper
New Trends in Applied Artificial Intelligence (IEA/AIE 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4570)


Abstract

The learning algorithms of multilayered feed-forward networks can be classified into two categories: gradient and non-gradient methods. Gradient descent algorithms such as backpropagation (BP) and its variations are widely used in many application areas because of their convenience. However, the most serious problem associated with BP is the local minima problem. We propose an improved gradient descent algorithm intended to weaken the local minima problem without compromising the simplicity of the gradient descent method. This algorithm, called the dual gradient learning algorithm, evaluates and trains the upper connections (hidden-to-output) and the lower connections (input-to-hidden) separately. To do so, target values for the hidden-layer units are introduced and used as the evaluation criteria for the lower connections. Simulations on several benchmark problems and a real classification task demonstrate the validity of the proposed method.
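The abstract only outlines the idea; the sketch below illustrates how hidden-unit targets might be used to train the two weight layers separately in a two-layer feed-forward network. It is a minimal NumPy sketch under assumed update rules (the hidden-target construction, hyperparameters, and the XOR usage example are illustrative assumptions, not the authors' exact formulation).

```python
# Illustrative sketch only -- the paper's exact update rules are not given here.
# Assumption: hidden-unit targets are formed by nudging the hidden activations
# in the direction that reduces the output error; the lower (input-to-hidden)
# weights are then trained toward those targets, separately from the upper weights.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_dual_gradient(X, T, n_hidden=4, lr=0.5, epochs=5000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))   # lower connections
    W2 = rng.normal(scale=0.5, size=(n_hidden, T.shape[1]))   # upper connections
    for _ in range(epochs):
        # forward pass
        H = sigmoid(X @ W1)          # hidden activations
        Y = sigmoid(H @ W2)          # network outputs

        # upper connections: ordinary gradient step toward the output targets T
        delta_out = (Y - T) * Y * (1.0 - Y)
        W2 -= lr * H.T @ delta_out

        # hidden targets (assumed construction): shift hidden activations
        # opposite to the error propagated through the upper connections
        H_target = np.clip(H - delta_out @ W2.T, 0.0, 1.0)

        # lower connections: gradient step toward the hidden targets,
        # evaluated independently of the output layer
        delta_hid = (H - H_target) * H * (1.0 - H)
        W1 -= lr * X.T @ delta_hid
    return W1, W2

# usage on the XOR benchmark (a common local-minima test case)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_dual_gradient(X, T)
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```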



Editor information

Hiroshi G. Okuno, Moonis Ali


Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Choi, B., Lee, J.H., Park, T.S. (2007). Dual Gradient Descent Algorithm on Two-Layered Feed-Forward Artificial Neural Networks. In: Okuno, H.G., Ali, M. (eds) New Trends in Applied Artificial Intelligence. IEA/AIE 2007. Lecture Notes in Computer Science (LNAI), vol 4570. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73325-6_69


  • DOI: https://doi.org/10.1007/978-3-540-73325-6_69

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-73322-5

  • Online ISBN: 978-3-540-73325-6

  • eBook Packages: Computer Science, Computer Science (R0)
