
Feature Extraction Using Support Vector Machines

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 6444)

Abstract

We discuss feature extraction by support vector machines (SVMs). Because the coefficient vector of a trained SVM's separating hyperplane is orthogonal to that hyperplane, it can serve as a projection vector. To obtain further projection vectors orthogonal to those already found, we train the SVM in the complementary space of the space spanned by the vectors obtained so far; this is done by modifying the kernel function. We demonstrate the validity of the method on two-class benchmark data sets.
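As a rough illustration of the idea, the following is a minimal sketch of our own, not the authors' exact formulation: it assumes a linear kernel and scikit-learn's LinearSVC, and the helper name svm_projection_vectors is hypothetical. For a linear kernel, training in the complementary space of the directions found so far amounts to deflating the data, i.e. removing the component along each previously extracted direction before retraining; the paper's kernel modification handles the nonlinear case.

```python
# Minimal sketch (linear-kernel special case, assumptions as noted above):
# repeatedly train a linear SVM, take the normalized weight vector as a
# projection direction, and deflate the data against it before retraining.
import numpy as np
from sklearn.svm import LinearSVC

def svm_projection_vectors(X, y, n_vectors=2, C=1.0):
    """Extract (approximately) mutually orthogonal projection vectors."""
    Xd = X.astype(float).copy()   # working copy that gets deflated
    vectors = []
    for _ in range(n_vectors):
        clf = LinearSVC(C=C).fit(Xd, y)
        w = clf.coef_.ravel()
        w /= np.linalg.norm(w)    # unit-length projection direction
        vectors.append(w)
        # Remove the component along w; the next SVM then trains in the
        # complementary space, so its weight vector is (numerically)
        # orthogonal to all directions extracted so far.
        Xd -= np.outer(Xd @ w, w)
    return np.array(vectors)

# Usage: map two-class data (X, y) onto the extracted features.
# W = svm_projection_vectors(X, y, n_vectors=2)
# features = X @ W.T
```

The deflation step can be viewed as the linear special case of the kernel modification described in the abstract: instead of changing the kernel function, the data themselves are projected onto the complementary space before each retraining.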




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tajiri, Y., Yabuwaki, R., Kitamura, T., Abe, S. (2010). Feature Extraction Using Support Vector Machines. In: Wong, K.W., Mendis, B.S.U., Bouzerdoum, A. (eds) Neural Information Processing. Models and Applications. ICONIP 2010. Lecture Notes in Computer Science, vol 6444. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-17534-3_14

  • DOI: https://doi.org/10.1007/978-3-642-17534-3_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-17533-6

  • Online ISBN: 978-3-642-17534-3

  • eBook Packages: Computer Science, Computer Science (R0)
