Can Entropic Regularization Be Replaced by Squared Euclidean Distance Plus Additional Linear Constraints

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4005)

Abstract

There are two main families of on-line learning algorithms, depending on whether a relative entropy or a squared Euclidean distance is used as the regularizer. The difference in performance between the two families can be dramatic. The question is whether one can always achieve comparable performance by replacing the relative-entropy regularization with a squared Euclidean distance plus additional linear constraints. We formulate a simple open problem along these lines for the case of learning disjunctions.
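
To make the two families concrete, below is a minimal sketch contrasting a multiplicative update (Winnow, which arises from a relative-entropy regularizer) with an additive update (the Perceptron, which arises from a squared-Euclidean-distance regularizer) on the task of learning a disjunction. All function names, parameter choices, and the data-generation scheme are illustrative assumptions, not anything from the paper; the dramatic gap alluded to above (roughly k log n versus k n mistakes when only k of n Boolean attributes are relevant) is a worst-case statement, so a random run like this one merely exercises the two update rules.

    import random

    def disjunction_label(x, relevant):
        # Target concept: the OR of the attributes indexed by `relevant`.
        return 1 if any(x[i] for i in relevant) else 0

    def random_example(n, relevant, p_noise=0.05):
        # Illustrative data model: irrelevant attributes fire independently
        # with probability p_noise; a fair coin decides whether the
        # disjunction is satisfied.
        x = [1 if random.random() < p_noise else 0 for _ in range(n)]
        for i in relevant:
            x[i] = 0
        if random.random() < 0.5:
            x[random.choice(relevant)] = 1
        return x, disjunction_label(x, relevant)

    def run_winnow(examples, n):
        # Multiplicative family: uniform start, promote/demote active
        # weights by a factor of 2 on each mistake.
        theta = n  # standard threshold choice for Winnow
        w = [1.0] * n
        mistakes = 0
        for x, y in examples:
            y_hat = 1 if sum(w[i] for i in range(n) if x[i]) >= theta else 0
            if y_hat != y:
                mistakes += 1
                factor = 2.0 if y == 1 else 0.5
                for i in range(n):
                    if x[i]:
                        w[i] *= factor
        return mistakes

    def run_perceptron(examples, n):
        # Additive family: zero start, add or subtract the instance on
        # each mistake.
        w = [0.0] * n
        b = 0.0
        mistakes = 0
        for x, y in examples:
            y_hat = 1 if sum(w[i] * x[i] for i in range(n)) + b > 0 else 0
            if y_hat != y:
                mistakes += 1
                delta = 1.0 if y == 1 else -1.0
                for i in range(n):
                    w[i] += delta * x[i]
                b += delta
        return mistakes

    if __name__ == "__main__":
        random.seed(0)
        n, k, trials = 1000, 3, 5000
        relevant = list(range(k))  # target: x_0 OR x_1 OR x_2
        examples = [random_example(n, relevant) for _ in range(trials)]
        print("Winnow mistakes:    ", run_winnow(examples, n))
        print("Perceptron mistakes:", run_perceptron(examples, n))

The open problem asks whether the advantage of the multiplicative side can be recovered within the squared-Euclidean family by adding linear constraints to the update's optimization problem, rather than by switching the regularizer.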




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Warmuth, M.K. (2006). Can Entropic Regularization Be Replaced by Squared Euclidean Distance Plus Additional Linear Constraints. In: Lugosi, G., Simon, H.U. (eds.) Learning Theory. COLT 2006. Lecture Notes in Computer Science, vol. 4005. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11776420_48

  • DOI: https://doi.org/10.1007/11776420_48

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-35294-5

  • Online ISBN: 978-3-540-35296-9

  • eBook Packages: Computer Science, Computer Science (R0)
