
Learning SVM with Varied Example Cost: A kNN Evaluating Approach

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4456)

Abstract

The paper proposes a model that merges a non-parametric k-nearest-neighbor (kNN) method into an underlying support vector machine (SVM) to produce an instance-dependent loss function. In this model, a kNN-based filtering stage collects information from the training examples and produces a set of emphasis weights, which are distributed to every example through a class of real-valued class labels. These emphasis weights replace the conventional policy of giving every training example equal impact, permitting more efficient use of the information carried by examples of varying significance. Because the kNN method estimates density locally, it can distinguish heterogeneous examples from regular ones by considering only the neighborhood of each example itself. The paper shows the model is promising through both theoretical derivations and the subsequent experimental results.
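To make the mechanism concrete, the following is a minimal sketch in Python of kNN-derived instance weights feeding a weighted SVM, assuming scikit-learn's NearestNeighbors and the sample_weight argument of SVC.fit. The weighting rule used here (the fraction of same-class neighbors among the k nearest, clipped to a small floor) is an illustrative assumption, not the paper's exact real-valued class-label construction.

```python
# Sketch: de-emphasize heterogeneous examples via kNN, then train a weighted SVM.
# The weighting rule (fraction of same-label neighbors) is an illustrative
# assumption standing in for the paper's real-valued class-label scheme.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def knn_emphasis_weights(X, y, k=5):
    """Weight each example by the label agreement of its k nearest neighbors.

    Examples surrounded by same-class neighbors get weight near 1;
    heterogeneous examples (mostly opposite-class neighbors) are
    de-emphasized toward a small floor.
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)              # idx[:, 0] is the point itself
    agree = (y[idx[:, 1:]] == y[:, None]).mean(axis=1)
    return np.clip(agree, 0.1, 1.0)        # small floor so no example vanishes

# Toy data with 10% label noise, so some examples are genuinely heterogeneous.
X, y = make_classification(n_samples=300, n_features=10, flip_y=0.1,
                           random_state=0)
w = knn_emphasis_weights(X, y, k=5)

# sample_weight scales each example's slack penalty (effectively C_i = C * w_i),
# realizing an instance-dependent loss in the underlying SVM.
clf = SVC(kernel="rbf", C=1.0).fit(X, y, sample_weight=w)
print("training accuracy:", clf.score(X, y))
```

In this sketch the per-example weight scales that example's contribution to the SVM's hinge loss, so a label-noisy or atypical point pulls less on the decision boundary, which is the intended effect of varying the example cost.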




Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yang, CY., Hsu, CC., Yang, JS. (2007). Learning SVM with Varied Example Cost: A kNN Evaluating Approach. In: Wang, Y., Cheung, Ym., Liu, H. (eds) Computational Intelligence and Security. CIS 2006. Lecture Notes in Computer Science (LNAI), vol 4456. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74377-4_35


  • DOI: https://doi.org/10.1007/978-3-540-74377-4_35

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-74376-7

  • Online ISBN: 978-3-540-74377-4

  • eBook Packages: Computer Science, Computer Science (R0)
