Abstract
The paper proposes a model that merges a non-parametric k-nearest-neighbor (kNN) method with an underlying support vector machine (SVM) to produce an instance-dependent loss function. In this model, a kNN-based filtering stage collects information from the training examples and produces a set of emphasis weights, which are assigned to individual examples through real-valued class labels. These weights replace the conventional policy of giving every training example equal influence and allow the information carried by examples of differing significance to be exploited more efficiently. Because the kNN method estimates density locally, it can distinguish heterogeneous examples from regular ones by examining only the neighborhood of each example. The paper shows the model is promising through both theoretical derivations and experimental results.
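The abstract does not specify the paper's exact weighting rule, so the following is only a minimal sketch of the general scheme, assuming a simple rule (the fraction of a point's k nearest neighbors that share its label) and realizing the instance-dependent loss via per-example weights that scale each example's hinge loss (scikit-learn's `sample_weight`) rather than via literally rescaled real-valued labels. The helper `knn_emphasis_weights` and all parameter choices are illustrative, not the authors' method.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def knn_emphasis_weights(X, y, k=5):
    """Hypothetical weighting rule: weight each training example by the
    fraction of its k nearest neighbors sharing its class label.
    Homogeneous neighborhoods yield weights near 1; heterogeneous
    (noisy or boundary) examples receive smaller weights."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                      # idx[:, 0] is the point itself
    return (y[idx[:, 1:]] == y[:, None]).mean(axis=1)

# Toy two-class data with a few flipped labels to simulate heterogeneity.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y[rng.choice(100, 5, replace=False)] ^= 1

w = knn_emphasis_weights(X, y, k=5)

# Weighted SVM: each example's contribution to the hinge loss is scaled
# by its kNN-derived weight, giving an instance-dependent loss.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y, sample_weight=w)
```

Under this assumed rule, examples whose labels disagree with their neighborhoods contribute less to the margin optimization, which matches the abstract's goal of down-weighting heterogeneous examples relative to regular ones.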
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Yang, C.-Y., Hsu, C.-C., Yang, J.-S. (2007). Learning SVM with Varied Example Cost: A kNN Evaluating Approach. In: Wang, Y., Cheung, Y.-M., Liu, H. (eds.) Computational Intelligence and Security. CIS 2006. Lecture Notes in Computer Science, vol. 4456. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74377-4_35
DOI: https://doi.org/10.1007/978-3-540-74377-4_35
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-74376-7
Online ISBN: 978-3-540-74377-4