
Efficient Approximations of Kernel Robust Soft LVQ

  • Conference paper
Advances in Self-Organizing Maps

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 198)

Abstract

Robust soft learning vector quantization (RSLVQ) constitutes a probabilistic extension of learning vector quantization (LVQ) based on a labeled Gaussian mixture model of the data. Training optimizes the likelihood ratio of the model and recovers a variant similar to LVQ2.1 in the limit of small bandwidth. Recently, RSLVQ has been extended to a kernel version, thus opening the way towards more general data structures characterized in terms of a Gram matrix only. While leading to state-of-the-art results, this extension has the drawback that models are no longer sparse and training complexity becomes quadratic. In this contribution, we investigate two approximation schemes which lead to sparse models: k-approximations of the prototypes and the Nyström approximation of the Gram matrix. We investigate the behavior of these approximations on a couple of benchmarks.
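The two approximation schemes named in the abstract can be illustrated concretely. The following NumPy sketch shows a plain Nyström approximation of an n × n Gram matrix built from m landmark columns, and a k-sparse truncation of a kernel-space prototype's coefficient vector. It is a minimal sketch under general assumptions, not the authors' implementation; all names (`nystroem_approximation`, `k_approximate_prototype`, `gram`, `alpha`) are illustrative.

```python
import numpy as np

def nystroem_approximation(gram, m, seed=None):
    """Rank-m Nystroem surrogate of an n x n Gram matrix from m landmark columns."""
    rng = np.random.default_rng(seed)
    n = gram.shape[0]
    idx = rng.choice(n, size=m, replace=False)      # random landmark selection
    K_nm = gram[:, idx]                             # n x m slice of the Gram matrix
    K_mm = gram[np.ix_(idx, idx)]                   # m x m landmark block
    # K is approximated by K_nm K_mm^+ K_nm^T, a low-rank factorization
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

def k_approximate_prototype(alpha, k):
    """Keep only the k largest-magnitude coefficients of a kernel-space prototype
    w = sum_i alpha_i * phi(x_i), assuming nonnegative coefficients summing to one."""
    sparse = np.zeros_like(alpha)
    keep = np.argsort(np.abs(alpha))[-k:]           # indices of the k dominant coefficients
    sparse[keep] = alpha[keep]
    total = sparse.sum()
    if total > 0:                                   # renormalize so the combination stays convex
        sparse /= total
    return sparse
```

In kernel RSLVQ a prototype is represented as a linear combination of feature-mapped training points, so truncating its coefficient vector to k entries yields a sparse model, while a Nyström surrogate replaces the full Gram matrix by a low-rank factorization and thereby reduces the quadratic cost of training.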




Author information

Correspondence to Daniela Hofmann.



Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Hofmann, D., Gisbrecht, A., Hammer, B. (2013). Efficient Approximations of Kernel Robust Soft LVQ. In: Estévez, P., Príncipe, J., Zegers, P. (eds) Advances in Self-Organizing Maps. Advances in Intelligent Systems and Computing, vol 198. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35230-0_19


  • DOI: https://doi.org/10.1007/978-3-642-35230-0_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-35229-4

  • Online ISBN: 978-3-642-35230-0

  • eBook Packages: Engineering, Engineering (R0)
