
On the Smallest Possible Dimension and the Largest Possible Margin of Linear Arrangements Representing Given Concept Classes Uniform Distribution

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2533)

Abstract

This paper discusses theoretical limitations of classification systems that are based on feature maps and use a separating hyperplane in the feature space. In particular, we study the embeddability of a given concept class into a class of Euclidean half spaces of low dimension, or of arbitrarily large dimension but realizing a large margin. New bounds on the smallest possible dimension or on the largest possible margin are presented. In addition, we present new results on the rigidity of matrices and briefly mention applications in complexity and learning theory.
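
As a concrete illustration of the two quantities in the abstract (not taken from the paper itself), Forster's spectral bound relates both to the spectral norm ||M|| of a concept class's sign matrix: any arrangement of Euclidean half spaces realizing an n x n sign matrix M needs dimension at least n/||M||, and no arrangement can realize a margin larger than ||M||/n. The NumPy sketch below evaluates both bounds for a Hadamard matrix, for which ||M|| = sqrt(n); the construction and variable names are ours, not the paper's.

    import numpy as np

    # Build a 32 x 32 Hadamard sign matrix (entries +-1) by Sylvester's construction.
    H2 = np.array([[1, 1], [1, -1]])
    M = H2
    for _ in range(4):
        M = np.kron(M, H2)
    n = M.shape[0]  # n = 32

    # Spectral norm ||M|| = largest singular value; equals sqrt(n) for a Hadamard matrix.
    spec = np.linalg.norm(M, 2)

    dim_lower = n / spec      # smallest possible dimension is at least n/||M|| (= sqrt(n) here)
    margin_upper = spec / n   # largest possible margin is at most ||M||/n (= 1/sqrt(n) here)

    print(f"n = {n}, ||M|| = {spec:.3f}")
    print(f"dimension lower bound: {dim_lower:.3f}")     # ~5.657
    print(f"margin upper bound:    {margin_upper:.4f}")  # ~0.1768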

This work has been supported in part by the ESPRIT Working Group in Neural and Computational Learning II, NeuroCOLT2, No. 27150, and by the Deutsche Forschungsgemeinschaft Grant SI 498/4-1.

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Forster, J., Simon, H.U. (2002). On the Smallest Possible Dimension and the Largest Possible Margin of Linear Arrangements Representing Given Concept Classes Uniform Distribution. In: Cesa-Bianchi, N., Numao, M., Reischuk, R. (eds) Algorithmic Learning Theory. ALT 2002. Lecture Notes in Computer Science, vol 2533. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36169-3_12

  • DOI: https://doi.org/10.1007/3-540-36169-3_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-00170-6

  • Online ISBN: 978-3-540-36169-5
