
Dynamics of Gradient-Based Learning and Applications to Hyperparameter Estimation

  • Conference paper
Intelligent Data Engineering and Automated Learning (IDEAL 2003)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2690)


Abstract

We analyse the dynamics of gradient-based learning algorithms using the cavity method, considering the cases of batch learning with non-vanishing learning rates and on-line learning. The theory shows excellent agreement with simulations. Applications to efficient and precise estimation of hyperparameters are proposed.
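As an orientation to the setting (not part of the paper), the following minimal sketch contrasts the two update rules the abstract refers to: batch gradient descent on a finite training set with a fixed, non-vanishing learning rate, and on-line learning from randomly drawn examples. The quadratic loss, the linear student/teacher setup, and all variable names are illustrative assumptions; the sketch does not implement the cavity-method analysis or the hyperparameter estimation scheme.

    # Illustrative sketch (not from the paper): batch vs. on-line gradient descent
    # for a linear student trained on a finite set, with a fixed ("non-vanishing")
    # learning rate. Loss, data model, and all names are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    N, P, eta, sweeps = 50, 100, 0.05, 200     # input dim, examples, learning rate, sweeps

    X = rng.standard_normal((P, N)) / np.sqrt(N)   # training inputs
    w_teacher = rng.standard_normal(N)
    y = X @ w_teacher                              # noiseless teacher outputs

    def grad(w):
        """Gradient of the quadratic training error E = sum_mu (w.x_mu - y_mu)^2 / 2."""
        return X.T @ (X @ w - y)

    # Batch learning: one update per sweep, using the gradient over the whole set.
    w_batch = np.zeros(N)
    for _ in range(sweeps):
        w_batch -= eta * grad(w_batch)

    # On-line learning: one randomly drawn example per update.
    w_online = np.zeros(N)
    for _ in range(sweeps * P):
        mu = rng.integers(P)
        w_online -= eta * (w_online @ X[mu] - y[mu]) * X[mu]

    print("batch  weight error:", np.mean((w_batch - w_teacher) ** 2))
    print("online weight error:", np.mean((w_online - w_teacher) ** 2))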


References

  1. Saad, D. (ed.): On-line Learning in Neural Networks. Cambridge University Press, Cambridge (1998)

  2. Wong, K.Y.M., Li, S., Tong, Y.W.: Many-body approach to the dynamics of batch learning. Phys. Rev. E 62, 4036–4042 (2000)

  3. Luo, P., Wong, K.Y.M.: Dynamical and stationary properties of on-line learning from finite training sets. Phys. Rev. E 67, 11906 (2003)

  4. Bös, S., Opper, M.: Dynamics of batch learning in a perceptron. J. Phys. A 31, 4835–4850 (1998)

  5. Barber, D., Sollich, P.: Online learning from finite training sets. In [1], 279–302 (1998)

  6. Heimel, J.A.F., Coolen, A.C.C.: Supervised learning with restricted training sets: a generating functional analysis. J. Phys. A: Math. Gen. 34, 9009–9026 (2001)

  7. Amari, S., Murata, N., Müller, K.-R., Finke, M., Yang, H.H.: Asymptotic statistical theory of overtraining and cross-validation. IEEE Trans. on Neural Networks 8, 985–996 (1997)

  8. Wong, K.Y.M., Li, F.: Fast parameter estimation using Green's functions. In: Dietterich, T.G., Becker, S., Ghahramani, Z. (eds.) Advances in Neural Information Processing Systems 14. MIT Press, Cambridge (2002)




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wong, K.Y.M., Luo, P., Li, F. (2003). Dynamics of Gradient-Based Learning and Applications to Hyperparameter Estimation. In: Liu, J., Cheung, Ym., Yin, H. (eds) Intelligent Data Engineering and Automated Learning. IDEAL 2003. Lecture Notes in Computer Science, vol 2690. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45080-1_48


  • DOI: https://doi.org/10.1007/978-3-540-45080-1_48

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40550-4

  • Online ISBN: 978-3-540-45080-1

  • eBook Packages: Springer Book Archive
