
Sparsification of Indefinite Learning Models

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11004)

Abstract

The recently proposed Kreĭn space Support Vector Machine (KSVM) is an efficient classifier for indefinite learning problems, but its decision function is non-sparse. This very dense decision function hinders practical applications due to a costly out-of-sample extension. In this paper we provide a post-processing technique to sparsify the decision function obtained from a Kreĭn space SVM and variants thereof. We evaluate the influence of different levels of sparsity and employ a Nyström approach to address large-scale problems. Experiments show that our algorithm is similarly efficient to the non-sparse Kreĭn space Support Vector Machine, but at substantially lower cost, so that large-scale problems can also be processed.
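The paper's exact sparsification algorithm is not reproduced on this page. As a rough illustration of the general idea, the following sketch approximates a dense kernel expansion f = K·α (as produced by a Kreĭn space SVM) using only a few expansion points, via greedy orthogonal-matching-pursuit-style selection over the columns of the training similarity matrix. The function name, interface, and selection criterion are illustrative assumptions, not the authors' method:

```python
import numpy as np

def sparsify_decision_function(K, alpha, n_terms):
    """Greedily approximate the dense kernel expansion f = K @ alpha
    by an expansion over only `n_terms` columns of K, in the spirit of
    orthogonal matching pursuit.

    K       : (n, n) similarity matrix on the training points (may be indefinite)
    alpha   : (n,)  dense coefficient vector, e.g. from a Krein-space SVM
    n_terms : number of expansion terms to keep
    Returns (indices, beta) such that f is approximated by K[:, indices] @ beta.
    """
    target = K @ alpha              # dense decision values on the training set
    residual = target.copy()
    selected = []
    beta = np.zeros(0)
    for _ in range(n_terms):
        # pick the kernel column most correlated with the current residual
        scores = np.abs(K.T @ residual)
        scores[selected] = -np.inf  # never reselect a column
        selected.append(int(np.argmax(scores)))
        # re-fit all kept coefficients jointly (the "orthogonal" step)
        beta, *_ = np.linalg.lstsq(K[:, selected], target, rcond=None)
        residual = target - K[:, selected] @ beta
    return selected, beta
```

Because only least-squares refitting is used, no positive-definiteness of K is assumed, which is what makes such a post-processing step applicable in the indefinite setting the paper considers.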


Notes

  1. Obtained by evaluating k(x, y) for training points x, y.

  2. A similar strategy may be possible for KSVM, but it would be considerably more complicated, because typically many points are support vectors and special sparse SVM solvers would be necessary.


Acknowledgment

We would like to thank Gaëlle Bonnet-Loosli for providing support with the Kreĭn space SVM.

Author information


Corresponding author

Correspondence to Frank-Michael Schleif.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Schleif, F.-M., Raab, C., Tino, P. (2018). Sparsification of Indefinite Learning Models. In: Bai, X., Hancock, E., Ho, T., Wilson, R., Biggio, B., Robles-Kelly, A. (eds) Structural, Syntactic, and Statistical Pattern Recognition. S+SSPR 2018. Lecture Notes in Computer Science, vol 11004. Springer, Cham. https://doi.org/10.1007/978-3-319-97785-0_17


  • DOI: https://doi.org/10.1007/978-3-319-97785-0_17


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-97784-3

  • Online ISBN: 978-3-319-97785-0

