Abstract
Stability and bias-variance analysis are two powerful tools for understanding learning algorithms. We use these tools to analyze the learning the kernel matrix (LKM) algorithm. The motivation is twofold: (i) LKM works in the transductive setting, where both training and test data points must be given a priori, so it is worth knowing how stable LKM is under small variations in the data set; and (ii) it has been argued that LKMs overfit the given data set. In particular, we are interested in answering the following questions: (a) Is LKM a stable algorithm? (b) Does it overfit? (c) What is the bias behavior with different optimal kernels? Our experimental results show that LKMs do not overfit the given data set, while the stability analysis reveals that they are unstable algorithms.
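To make the stability question concrete, the following is a minimal sketch of a leave-one-out perturbation protocol: retrain after removing a single point and measure how much the decision function moves on the shared data. It uses a scikit-learn SVM with a fixed RBF kernel as a hypothetical stand-in for the LKM solver (the paper's algorithm learns the kernel matrix itself via semidefinite programming, a step not reproduced here); the function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC


def empirical_stability(X, y, n_trials=20, seed=0):
    """Estimate stability empirically: retrain with one point removed and
    record the largest change in decision values on the remaining points."""
    rng = np.random.default_rng(seed)
    base = SVC(kernel="rbf").fit(X, y)          # stand-in for an LKM-trained classifier
    base_scores = base.decision_function(X)
    diffs = []
    for _ in range(n_trials):
        i = rng.integers(len(X))                # index of the left-out point
        mask = np.arange(len(X)) != i
        pert = SVC(kernel="rbf").fit(X[mask], y[mask])
        # Maximum shift of the decision function on the shared points.
        diffs.append(np.max(np.abs(pert.decision_function(X[mask])
                                   - base_scores[mask])))
    return float(np.mean(diffs))


X, y = make_classification(n_samples=200, random_state=0)
print("mean leave-one-out perturbation:", empirical_stability(X, y))
```

A stable algorithm in this sense keeps the reported perturbation small as the sample size grows; an unstable one, as the paper finds for LKMs, does not.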
Cite this paper
Saradhi, V.V., Karnick, H.: On the Stability and Bias-Variance Analysis of Kernel Matrix Learning. In: Kobti, Z., Wu, D. (eds.) Advances in Artificial Intelligence. Canadian AI 2007. Lecture Notes in Computer Science, vol. 4509. Springer, Berlin, Heidelberg (2007). https://doi.org/10.1007/978-3-540-72665-4_38