Abstract
Regularization theory presents a sound framework for solving supervised learning problems. However, regularization networks are large: they contain one hidden unit per training sample, so their size grows with the training set. In this work we study the relationship between network complexity, i.e. the number of hidden units, and the approximation and generalization abilities of the network. We propose an incremental hybrid learning algorithm that produces smaller networks with performance comparable to that of the original regularization networks.
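To make the complexity problem concrete, below is a minimal sketch of the classical regularization network the abstract refers to: one Gaussian hidden unit is placed on every training point and the output weights come from a regularized linear system, so the network size necessarily equals the number of training samples. This is our own illustration of the standard construction, not the paper's proposed incremental algorithm; the Gaussian kernel, its width, and the regularization parameter gamma are assumptions.

```python
import numpy as np

def gaussian_kernel(X, Z, width=1.0):
    """Gaussian (RBF) kernel matrix between row-wise sample sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_regularization_network(X, y, gamma=1e-3, width=1.0):
    """Classical regularization network: one hidden unit per training
    sample; output weights solve (K + gamma * N * I) w = y."""
    N = len(X)
    K = gaussian_kernel(X, X, width)
    return np.linalg.solve(K + gamma * N * np.eye(N), y)

def predict(X_train, w, X_new, width=1.0):
    """f(x) = sum_i w_i K(x, x_i): evaluating the network requires
    all N training points, i.e. N hidden units."""
    return gaussian_kernel(X_new, X_train, width) @ w

# Toy usage: 200 training points yield a network with 200 hidden units.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
w = fit_regularization_network(X, y)
print(predict(X, w, np.array([[0.5]])))
```

The paper's contribution is an algorithm that keeps the performance of this full-size network while using fewer hidden units; its details are not given in the abstract, so they are not sketched here.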
Cite this paper
Vidnerová, P., Neruda, R.: Hybrid Learning of Regularization Neural Networks. In: Rutkowski, L., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) Artificial Intelligence and Soft Computing, ICAISC 2010. Lecture Notes in Computer Science, vol. 6114. Springer, Berlin, Heidelberg (2010). https://doi.org/10.1007/978-3-642-13232-2_15