The Use of Stability Principle for Kernel Determination in Relevance Vector Machines

  • Conference paper
Neural Information Processing (ICONIP 2006)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4232)


Abstract

The task of RBF kernel selection in Relevance Vector Machines (RVM) is considered. RVM exploits a probabilistic Bayesian learning framework that offers a number of advantages over state-of-the-art Support Vector Machines. In particular, RVM avoids having to determine the regularization coefficient C, replacing that search with evidence maximization. In this paper we show that RBF kernel selection within the Bayesian framework requires an extension of the algorithmic model. In the new model, integration over the posterior probability becomes intractable, so a point estimate of the posterior is used instead. In RVM the evidence is computed via the Laplace approximation. The extended model, however, does not allow maximization of the posterior probability, because the dimension of the optimization parameter space becomes too high; hence the Laplace approximation can no longer be used. We propose a local evidence estimation method that establishes a compromise between the accuracy and the stability of the algorithm. We first briefly describe the maximal evidence principle, then present our model of kernel algorithms together with our approximations for evidence estimation, and finally give results of an experimental evaluation. Both classification and regression cases are considered.
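For readers unfamiliar with the starting point the paper extends, the sketch below shows the standard RVM evidence-maximization loop for regression (the fixed-point hyperparameter updates of sparse Bayesian learning, where the Gaussian likelihood makes the Laplace approximation of the weight posterior exact). The RBF width gamma, the pruning threshold, and all function names are illustrative assumptions; the authors' local evidence estimation method itself is not reproduced here.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian RBF kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rvm_regression(X, t, gamma=1.0, n_iter=100, prune=1e6):
    """Sparse Bayesian regression: maximize the evidence p(t | alpha, sigma^2)
    over per-weight prior precisions alpha and the noise variance sigma^2
    via the standard fixed-point updates."""
    N = X.shape[0]
    Phi = rbf_kernel(X, X, gamma)   # N x M design matrix of RBF basis functions
    alpha = np.ones(N)              # precisions of the zero-mean Gaussian weight priors
    sigma2 = 0.1 * np.var(t)        # initial noise variance
    keep = np.arange(N)             # indices of surviving basis functions
    mu = np.zeros(N)
    for _ in range(n_iter):
        # Weight posterior is Gaussian with covariance Sigma and mean mu
        # (for Gaussian noise the Laplace approximation is exact).
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(alpha))
        mu = Sigma @ Phi.T @ t / sigma2
        # Fixed-point evidence updates.
        g = 1.0 - alpha * np.diag(Sigma)   # how well-determined each weight is
        alpha = g / (mu ** 2 + 1e-12)
        sigma2 = np.sum((t - Phi @ mu) ** 2) / max(N - g.sum(), 1e-12)
        # Prune basis functions whose precision diverges: their weights -> 0.
        mask = alpha < prune
        alpha, mu, Phi, keep = alpha[mask], mu[mask], Phi[:, mask], keep[mask]
    return mu, keep, sigma2

# Usage: fit a noisy sine; only a few "relevance vectors" should survive.
rng = np.random.default_rng(0)
X = np.linspace(-5, 5, 60)[:, None]
t = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)
mu, keep, sigma2 = rvm_regression(X, t, gamma=0.5)
print(f"{len(keep)} relevance vectors, estimated noise variance {sigma2:.4f}")
```

In the regression case the weight posterior is exactly Gaussian, which is why the loop above needs no further approximation; the difficulty the paper addresses arises once kernel parameters are added to the model, since the optimization parameter space then becomes too high-dimensional for the Laplace approach.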





Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kropotov, D., Vetrov, D., Ptashko, N., Vasiliev, O. (2006). The Use of Stability Principle for Kernel Determination in Relevance Vector Machines. In: King, I., Wang, J., Chan, L.-W., Wang, D. (eds.) Neural Information Processing. ICONIP 2006. Lecture Notes in Computer Science, vol. 4232. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893028_81

  • DOI: https://doi.org/10.1007/11893028_81

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-46479-2

  • Online ISBN: 978-3-540-46480-8

  • eBook Packages: Computer Science (R0)
