Part of the book series: Lecture Notes in Computer Science (LNCS, volume 8206)

Abstract

Due to the increasing number of large data sets, efficient learning algorithms are necessary. The interpretability of the final model is also desirable in order to draw meaningful conclusions from its results. Prototype-based learning algorithms have recently been extended to proximity learners so that data given in non-standard formats can be analyzed. The supervised methods of this type are of special interest but suffer from a large number of optimization parameters needed to model the prototypes. In this contribution we derive an efficient core-set-based preprocessing step that restricts the number of model parameters to \(O(\frac{n}{\epsilon^2})\), with n the number of prototypes. The number of model parameters thus becomes independent of the size of the data set and scales only with the requested precision ε of the core sets. Experimental results show that our approach does not significantly degrade performance while substantially reducing the memory complexity.
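The preprocessing rests on minimum enclosing ball (MEB) core sets, for which a greedy farthest-point construction of size O(1/ε²) is well known. The following is a minimal Python sketch of that idea, not the authors' implementation: for brevity, the exact inner ball solve on the current core set is replaced by its centroid, and the function name `meb_coreset` is our own.

```python
import numpy as np


def meb_coreset(X, eps):
    """Greedy farthest-point core set for an approximate minimum enclosing ball.

    Simplified sketch: the exact MEB solve on the current core set is
    replaced by its centroid. After O(1/eps^2) rounds the selected indices
    form a small core set whose enclosing ball, inflated by a factor of
    (1 + eps), covers all of X.
    """
    idx = [0]  # seed with an arbitrary point
    for _ in range(int(np.ceil(1.0 / eps ** 2))):
        center = X[idx].mean(axis=0)  # surrogate for the exact ball center
        far = int(np.argmax(np.linalg.norm(X - center, axis=1)))
        if far in idx:  # farthest point already selected; stop early
            break
        idx.append(far)
    return idx


# Usage: one core set per prototype keeps O(1/eps^2) points each,
# so n prototypes need O(n / eps^2) model parameters in total,
# independent of the number of data points.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
print(len(meb_coreset(X, eps=0.1)), "core-set points out of", len(X))
```

Note the trade-off the abstract describes: the core-set size is governed only by the requested precision ε, not by the data set size, which is what makes the memory footprint of the resulting prototype model independent of the number of samples.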




Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Schleif, F.-M., Zhu, X., Hammer, B. (2013). Sparse Prototype Representation by Core Sets. In: Yin, H., et al. (eds.) Intelligent Data Engineering and Automated Learning – IDEAL 2013. Lecture Notes in Computer Science, vol. 8206. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41278-3_37


  • DOI: https://doi.org/10.1007/978-3-642-41278-3_37

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-41277-6

  • Online ISBN: 978-3-642-41278-3

  • eBook Packages: Computer Science (R0)
