Adaptive Hausdorff Distances and Tangent Distance Adaptation for Transformation Invariant Classification Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9949)

Abstract

Tangent distances (TDs) are important concepts for describing data manifold distances in machine learning. In this paper we show that the Hausdorff distance is equivalent to the TD under certain conditions, and thereby prove the metric properties of TDs. We then use these TDs as a dissimilarity measure in learning vector quantization (LVQ) for classification learning of class distributions with high variability. In particular, we integrate the TD into the learning scheme of LVQ to obtain TD adaptation during LVQ learning. The TD approach extends the classical prototype concept to affine subspaces, which yields a higher topological richness than prototypes taken as points in the data space. By the manifold theory of TDs we can ensure that the affine subspaces are aligned along directions of transformations that are invariant with respect to class discrimination. We demonstrate the superiority of this new approach with two examples.
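The tangent distance underlying this approach measures the distance from a data point to the affine subspace spanned by a prototype and a set of tangent vectors. As a minimal, hypothetical NumPy sketch (not the paper's adaptive learning scheme — names and shapes are illustrative assumptions), the one-sided TD can be computed by orthogonally projecting the point onto that subspace:

```python
import numpy as np

def tangent_distance(x, w, T):
    """One-sided tangent distance between a point x and the affine
    subspace {w + T @ theta}: the Euclidean distance from x to its
    orthogonal projection onto that subspace.

    x : data point, shape (d,)
    w : prototype (subspace offset), shape (d,)
    T : tangent vectors as columns, shape (d, r)
    """
    # Optimal tangent coefficients via least squares:
    #   theta* = argmin_theta || (x - w) - T @ theta ||
    theta, *_ = np.linalg.lstsq(T, x - w, rcond=None)
    return np.linalg.norm(x - (w + T @ theta))

# Prototype at the origin with one tangent direction along the x-axis;
# projecting [3, 4] onto that axis leaves only the y-component.
w = np.zeros(2)
T = np.array([[1.0], [0.0]])
x = np.array([3.0, 4.0])
print(tangent_distance(x, w, T))  # → 4.0
```

In the adaptive scheme discussed in the paper, the prototype and the tangent vectors would additionally be updated during LVQ training; here they are held fixed purely for illustration.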


Notes

  1. The following statements remain true if we assume only that \(\left( \mathbb {M},+\right) \) is a group instead of a vector space; then \(U_{\mathbf {w}}\) is a subgroup instead of a subspace, and \(\mathcal {V}\), \(\mathcal {W}\) are left cosets instead of affine subspaces.


Author information

Correspondence to Thomas Villmann.


Copyright information

© 2016 Springer International Publishing AG

Cite this paper

Saralajew, S., Nebel, D., Villmann, T. (2016). Adaptive Hausdorff Distances and Tangent Distance Adaptation for Transformation Invariant Classification Learning. In: Hirose, A., Ozawa, S., Doya, K., Ikeda, K., Lee, M., Liu, D. (eds.) Neural Information Processing. ICONIP 2016. Lecture Notes in Computer Science, vol. 9949. Springer, Cham. https://doi.org/10.1007/978-3-319-46675-0_40

  • DOI: https://doi.org/10.1007/978-3-319-46675-0_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46674-3

  • Online ISBN: 978-3-319-46675-0

  • eBook Packages: Computer Science (R0)
