
Robust Kernel Approximation for Classification

  • Conference paper
Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10634)

Abstract

This paper investigates a robust kernel approximation scheme for support vector machine (SVM) classification with indefinite kernels. It addresses the case in which the indefinite kernel is contaminated by noise and outliers, i.e., it is a noisy observation of the true positive definite (PD) kernel. Traditional algorithms recover the PD kernel from the observation under the assumption of small Gaussian noise; however, this approach is not robust to noise and outliers that do not follow a Gaussian distribution. In this paper, we assume that the error follows a Gaussian-Laplacian distribution, which simultaneously models dense noise and sparse/abnormal noise and outliers. The derived optimization problem, comprising kernel learning and dual SVM classification, can be solved by an alternating iterative algorithm. Experiments on various benchmark data sets show the robustness of the proposed method compared with other state-of-the-art kernel-modification-based methods.
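The abstract describes decomposing a noisy observed kernel into a PD part plus Gaussian (dense) and Laplacian (sparse) error terms, solved by alternating updates. The paper's exact objective is not reproduced here; the following is a minimal NumPy sketch of one plausible instantiation of the kernel-denoising step only (the function names, the penalty `lam`, the use of eigenvalue clipping for the PSD constraint and soft-thresholding for the sparse part are illustrative assumptions, and the alternating dual-SVM step is omitted):

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def psd_project(K):
    """Project a symmetric matrix onto the PSD cone by clipping negative eigenvalues."""
    w, V = np.linalg.eigh((K + K.T) / 2.0)
    return (V * np.maximum(w, 0.0)) @ V.T

def denoise_kernel(K0, lam=0.1, n_iter=50):
    """Split a noisy observed kernel K0 into a PSD estimate K and a sparse
    residual E. Dense Gaussian-type noise is absorbed by the least-squares
    PSD fit of K; sparse Laplacian-type outliers are captured by the
    l1-penalised residual E."""
    E = np.zeros_like(K0)
    for _ in range(n_iter):
        K = psd_project(K0 - E)           # kernel step: PSD fit to K0 - E
        E = soft_threshold(K0 - K, lam)   # outlier step: prox of lam * ||E||_1
    return K, E
```

On a toy PSD kernel corrupted by a few large spiked entries, this recovers a PSD matrix noticeably closer to the clean kernel than the observation is.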


Notes

  1. The kernel matrix \(\mathbf {K}\) associated with a positive definite kernel \(\mathcal {K}\) is PSD.

  2. The probability density function of a Gaussian random variable x is defined as \(f_{\mathcal {N}}(x)=\frac{1}{\sqrt{2\pi }\sigma _N} \exp \big ( -\frac{x^2}{2\sigma _N^2} \big )\).

  3. The probability density function of a Laplacian random variable x is defined as \(f_{\mathcal {L}}(x)=\frac{1}{\sqrt{2}\sigma _L} \exp \big ( -\frac{\sqrt{2}|x|}{\sigma _L} \big )\).
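The two densities in footnotes 2 and 3 can be checked numerically; this is a small sketch (parameterising each by its standard deviation, so both have variance \(\sigma^2\)) confirming that they normalise to one and that the Laplacian's heavier tails are what make it suitable for modelling outliers:

```python
import numpy as np

def gaussian_pdf(x, sigma):
    """Zero-mean Gaussian density with standard deviation sigma (footnote 2)."""
    return np.exp(-x**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

def laplacian_pdf(x, sigma):
    """Zero-mean Laplacian density with standard deviation sigma (footnote 3)."""
    return np.exp(-np.sqrt(2.0) * np.abs(x) / sigma) / (np.sqrt(2.0) * sigma)
```

At five standard deviations the Laplacian density is orders of magnitude larger than the Gaussian one, so large residuals are penalised far less harshly under the Laplacian model.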

  4. Accuracy is defined as the percentage of total instances predicted correctly.

  5. Recall is the percentage of true positives that were correctly predicted positive.
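The two evaluation metrics defined in footnotes 4 and 5 are straightforward to compute; a minimal plain-Python sketch (the convention that the positive class is labelled 1 is an assumption):

```python
def accuracy(y_true, y_pred):
    """Fraction of all instances predicted correctly (footnote 4)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive=1):
    """Fraction of actual positives that were predicted positive (footnote 5)."""
    true_pos = sum(t == positive and p == positive
                   for t, p in zip(y_true, y_pred))
    actual_pos = sum(t == positive for t in y_true)
    return true_pos / actual_pos
```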


Acknowledgment

This work was supported in part by the National Natural Science Foundation of China under Grants 61572315, 6151101179, and 61603248, and in part by the 863 Plan of China under Grant 2015AA042308.

Author information

Correspondence to Jie Yang.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Liu, F., Huang, X., Peng, C., Yang, J., Kasabov, N. (2017). Robust Kernel Approximation for Classification. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol 10634. Springer, Cham. https://doi.org/10.1007/978-3-319-70087-8_31


  • DOI: https://doi.org/10.1007/978-3-319-70087-8_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70086-1

  • Online ISBN: 978-3-319-70087-8

  • eBook Packages: Computer Science (R0)
