Back Propagation with Randomized Cost Function for Training Neural Networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2639)

Abstract

This paper proposes a novel method for improving both the generalization and convergence performance of the back-propagation (BP) algorithm by training with multiple cost functions selected through a randomizing scheme. Under certain conditions, the randomized technique converges to the global minimum with probability one. Experimental results on benchmark encoder-decoder problems and the NC2 classification problem show that the method is effective in enhancing BP's convergence and generalization performance.
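
To make the idea concrete, below is a minimal sketch of BP training with a randomized cost function. Everything specific in it is an assumption for illustration: the 8-3-8 encoder task, the two candidate costs (squared error and cross-entropy), the uniform per-epoch selection, and the network sizes and learning rate are not taken from the paper, which does not spell out its exact scheme in this abstract.

```python
# Illustrative sketch (not the authors' exact formulation): at each epoch,
# one of several candidate cost functions is drawn at random, and its
# gradient drives the back-propagation weight update.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Output-layer error signals dE/dz for two candidate costs, for sigmoid
# outputs y and targets t.
def delta_squared_error(y, t):
    return (y - t) * y * (1.0 - y)      # squared error times sigmoid derivative

def delta_cross_entropy(y, t):
    return y - t                        # sigmoid + cross-entropy: derivative cancels

COSTS = [delta_squared_error, delta_cross_entropy]

# 8-3-8 encoder benchmark: reproduce one-hot inputs through a 3-unit bottleneck.
X = np.eye(8)
T = np.eye(8)
W1 = rng.normal(0, 0.5, (8, 3))
W2 = rng.normal(0, 0.5, (3, 8))
lr = 0.5

for epoch in range(5000):
    delta_fn = COSTS[rng.integers(len(COSTS))]  # randomized cost selection
    H = sigmoid(X @ W1)                         # hidden activations
    Y = sigmoid(H @ W2)                         # network outputs
    d2 = delta_fn(Y, T)                         # output-layer error signal
    d1 = (d2 @ W2.T) * H * (1.0 - H)            # back-propagated hidden signal
    W2 -= lr * H.T @ d2                         # gradient-descent updates
    W1 -= lr * X.T @ d1

print("final MSE:", np.mean((sigmoid(sigmoid(X @ W1) @ W2) - T) ** 2))
```

The design intuition, under these assumptions, is that alternating cost surfaces perturbs the descent trajectory enough to escape shallow local minima while each candidate cost still shares the global minimum at Y = T.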

Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Babri, H.A., Chen, Y.Q., Ahsan, K. (2003). Back Propagation with Randomized Cost Function for Training Neural Networks. In: Wang, G., Liu, Q., Yao, Y., Skowron, A. (eds) Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing. RSFDGrC 2003. Lecture Notes in Computer Science (LNAI), vol. 2639. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-39205-X_80

  • DOI: https://doi.org/10.1007/3-540-39205-X_80

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-14040-5

  • Online ISBN: 978-3-540-39205-7

  • eBook Packages: Springer Book Archive
