
Learning from Data as an Optimization and Inverse Problem

  • Conference paper

Part of the book series: Studies in Computational Intelligence (SCI, volume 399)

Abstract

Learning from data is investigated as minimization of the empirical error functional in spaces of continuous functions and in spaces defined by kernels. Using methods from the theory of inverse problems, an alternative proof of the Representer Theorem is given. Regularized and non-regularized minimization of the empirical error are compared.
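As a brief reminder of the setting the abstract refers to (this formulation is standard for kernel methods and is not quoted from the paper itself), regularized minimization of the empirical error over a reproducing kernel Hilbert space, and the finite form of the minimizer guaranteed by the Representer Theorem, can be sketched as:

```latex
% Regularized empirical error minimization over the RKHS H_K
% induced by a kernel K, given data (x_1, y_1), ..., (x_m, y_m)
% and a regularization parameter \gamma > 0 (square loss assumed here):
\min_{f \in \mathcal{H}_K} \;
  \frac{1}{m} \sum_{i=1}^{m} \bigl( f(x_i) - y_i \bigr)^2
  + \gamma \, \| f \|_{K}^{2}

% Representer Theorem: every minimizer lies in the span of the
% kernel functions centered at the data points,
f^{*}(x) \;=\; \sum_{i=1}^{m} c_i \, K(x, x_i).
```

For the square loss above, the coefficient vector is obtained from the linear system $(\mathbf{K} + \gamma m I)\, c = y$, where $\mathbf{K}$ is the $m \times m$ Gram matrix $\mathbf{K}_{ij} = K(x_i, x_j)$. Dropping the regularization term ($\gamma = 0$) recovers the non-regularized minimization of the empirical error that the abstract contrasts with the regularized case.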



Author information

Correspondence to Věra Kůrková.


Copyright information

© 2012 Springer-Verlag GmbH Berlin Heidelberg

About this paper

Cite this paper

Kůrková, V. (2012). Learning from Data as an Optimization and Inverse Problem. In: Madani, K., Dourado Correia, A., Rosa, A., Filipe, J. (eds) Computational Intelligence. IJCCI 2010. Studies in Computational Intelligence, vol 399. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-27534-0_24


  • DOI: https://doi.org/10.1007/978-3-642-27534-0_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-27533-3

  • Online ISBN: 978-3-642-27534-0

  • eBook Packages: Engineering (R0)
