
Regularization learning of neural networks for generalization

  • Technical Papers
  • Conference paper

Algorithmic Learning Theory (ALT 1992)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 743)

Abstract

In this paper, we propose a learning method for neural networks based on regularization and analyze its generalization capability. In learning from examples, training samples are drawn independently from some unknown probability distribution, and the goal of learning is to minimize the expected risk for future test samples drawn from the same distribution. The problem can be reduced to estimating the probability distribution from the samples alone, but this is generally ill-posed. To solve it stably, we use the regularization method. In practice, regularization learning can be carried out by enlarging the training set with copies of the samples perturbed by an appropriate amount of noise. We estimate the generalization error, defined as the difference between the expected risk achieved by the learning method and the truly minimal expected risk. Assuming the p-dimensional density function is s-times differentiable in each variable, we show that the mean square of the generalization error of regularization learning is given by Dn^{-2s/(2s+p)}, where n is the number of samples and D is a constant depending on the complexity of the neural network and the difficulty of the problem.
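
As a concrete illustration of the practical recipe above (enlarging the training set with noise-perturbed copies of the samples), here is a minimal Python sketch. It is not the paper's construction: the target function, the network width, the noise level sigma, and the optimization settings are all assumed values chosen only for the example. The point is simply that ordinary empirical-risk training is run on the augmented set of n·k jittered samples, with sigma playing the role of the regularization (smoothing) parameter.

```python
import numpy as np

# Minimal sketch (not the paper's experimental setup): "regularization learning"
# realized as noise injection -- each training sample is replicated with small
# Gaussian perturbations, and an ordinary network is trained on the enlarged set.

rng = np.random.default_rng(0)

# Hypothetical 1-D regression problem: y = sin(2*pi*x) + observation noise.
n = 50
x = rng.uniform(0.0, 1.0, size=(n, 1))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal((n, 1))

# Jitter augmentation: k noisy copies per sample; sigma is an assumed,
# hand-picked smoothing parameter for this illustration.
k, sigma = 20, 0.05
x_aug = np.repeat(x, k, axis=0) + sigma * rng.standard_normal((n * k, 1))
y_aug = np.repeat(y, k, axis=0)

# One-hidden-layer tanh network trained by plain full-batch gradient descent.
h = 30
W1 = 0.5 * rng.standard_normal((1, h))
b1 = np.zeros(h)
W2 = 0.5 * rng.standard_normal((h, 1))
b2 = np.zeros(1)
lr = 0.05

for _ in range(10000):
    z = np.tanh(x_aug @ W1 + b1)            # hidden activations
    pred = z @ W2 + b2                      # network output
    err = pred - y_aug                      # residuals on the augmented set
    # Gradients of (1/2) * mean squared error via backpropagation.
    gW2 = z.T @ err / len(x_aug)
    gb2 = err.mean(axis=0)
    dz = (err @ W2.T) * (1 - z ** 2)
    gW1 = x_aug.T @ dz / len(x_aug)
    gb1 = dz.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Estimate the expected risk on fresh samples from the same distribution.
x_test = rng.uniform(0.0, 1.0, size=(1000, 1))
y_test = np.sin(2 * np.pi * x_test) + 0.1 * rng.standard_normal((1000, 1))
test_risk = np.mean((np.tanh(x_test @ W1 + b1) @ W2 + b2 - y_test) ** 2)
print(f"estimated expected risk: {test_risk:.4f}")
```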



Author information

S. Akaho

Editor information

Shuji Doshita, Koichi Furukawa, Klaus P. Jantke, Toyaki Nishida


Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Akaho, S. (1993). Regularization learning of neural networks for generalization. In: Doshita, S., Furukawa, K., Jantke, K.P., Nishida, T. (eds) Algorithmic Learning Theory. ALT 1992. Lecture Notes in Computer Science, vol 743. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57369-0_31


  • DOI: https://doi.org/10.1007/3-540-57369-0_31

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57369-2

  • Online ISBN: 978-3-540-48093-8

  • eBook Packages: Springer Book Archive
