Soft Committee Machine Using Simple Derivative Term

  • Conference paper
Artificial Intelligence and Soft Computing (ICAISC 2014)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8467)


Abstract

In on-line gradient descent learning, the local property of the derivative of the output function can cause slow convergence. This phenomenon, called a plateau, occurs in the learning process of multilayer networks. To remedy this, we propose a method that replaces the derivative term with a constant, which greatly increases the relaxation speed. Moreover, replacing the derivative term with a second-order expansion of the derivative breaks the plateau faster than the original method.
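
As a rough illustration of the setup the abstract describes, here is a minimal sketch (not the authors' implementation) of on-line gradient descent in a soft committee machine, with a switch that swaps the usual derivative term g'(h_k) for a constant. The network sizes, learning rate, constant value, and the choice g(h) = erf(h/√2) are assumptions following the standard teacher-student formulation; the second-order-expansion variant is omitted.

```python
# Minimal sketch, not the authors' code: on-line gradient descent for a
# soft committee machine s(x) = sum_k g(J_k . x), trained against a fixed
# "teacher" of the same form. const_derivative replaces the usual
# derivative term g'(h_k) with a constant, as the abstract proposes.
# Sizes, learning rate, and g(h) = erf(h / sqrt(2)) are assumed values.
import numpy as np
from scipy.special import erf

def g(h):
    return erf(h / np.sqrt(2.0))                          # output function

def g_prime(h):
    return np.sqrt(2.0 / np.pi) * np.exp(-h * h / 2.0)   # dg/dh

def train(N=100, K=3, M=3, eta=0.1, steps=50_000, const_derivative=None, seed=0):
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((M, N)) / np.sqrt(N)   # teacher weights (fixed)
    J = rng.standard_normal((K, N)) / np.sqrt(N)   # student weights (learned)
    for _ in range(steps):
        x = rng.standard_normal(N)                 # one fresh random input
        t = g(B @ x).sum()                         # teacher output
        h = J @ x                                  # student local fields h_k
        s = g(h).sum()                             # student output
        # Standard rule: Delta J_k = (eta / N) * (t - s) * g'(h_k) * x.
        # Proposed variant: replace g'(h_k) with a constant.
        d = g_prime(h) if const_derivative is None else np.full(K, const_derivative)
        J += (eta / N) * (t - s) * d[:, None] * x[None, :]
    # Crude Monte Carlo estimate of the generalization error.
    X = rng.standard_normal((2000, N))
    return 0.5 * np.mean((g(X @ B.T).sum(axis=1) - g(X @ J.T).sum(axis=1)) ** 2)

print(train())                        # standard derivative term (plateau-prone)
print(train(const_derivative=0.4))    # derivative replaced by a constant
```

In this sketch the constant-derivative run typically escapes the symmetric plateau sooner; the value 0.4 is an arbitrary illustrative constant, not one taken from the paper.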

Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Hara, K., Katahira, K. (2014). Soft Committee Machine Using Simple Derivative Term. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2014. Lecture Notes in Computer Science, vol 8467. Springer, Cham. https://doi.org/10.1007/978-3-319-07173-2_6

  • DOI: https://doi.org/10.1007/978-3-319-07173-2_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-07172-5

  • Online ISBN: 978-3-319-07173-2

  • eBook Packages: Computer Science, Computer Science (R0)
