
Solving Semi-infinite Linear Programs Using Boosting-Like Methods

  • Conference paper
Algorithmic Learning Theory (ALT 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4264)


Abstract

Linear optimization problems (LPs) with a very large or even infinite number of constraints frequently appear in many forms in machine learning. A linear program with m constraints can be written as

$$ \begin{array}{lll} \min\limits_{{\mathbf{x}}\in{\mathcal{P}}^n} && {\bf c}^\top {\mathbf{x}} \\ \mbox{with} && {\bf a}_j^\top {\mathbf{x}}\leq b_j \quad \forall j=1,\ldots,m,\end{array} $$

where I assume for simplicity that the domain of x is the n-dimensional probability simplex \({\mathcal{P}}^n\). Optimization problems with an infinite number of constraints of the form \({\bf a}_j^\top {\mathbf{x}}\leq b_j\) for all j ∈ J are called semi-infinite when the index set J has infinitely many elements, e.g. J=ℝ. In the finite case the constraints can be described by a matrix with m rows and n columns that can be used to solve the LP directly. In semi-infinite linear programs (SILPs) the constraints are often given in a functional form depending on j, or are defined implicitly, for instance by the outcome of another algorithm.
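The setting can be illustrated with a minimal sketch of a boosting-like scheme for such problems: an oracle returns the currently most violated constraint from a (discretised) index set J, and a multiplicative-weights update keeps x on the probability simplex. The constraint family a(j) = (j, 1 − j), the grid over J = [0, 1], and the step size eta below are illustrative assumptions for the sketch, not the algorithm analysed in the paper.

```python
import math

def constraint(j):
    # Illustrative constraint family a(j) = (j, 1 - j) for j in J = [0, 1].
    return [j, 1.0 - j]

def most_violated(x, grid):
    # Oracle: return the index j whose constraint value a(j)^T x is largest.
    return max(grid, key=lambda j: sum(a * xi for a, xi in zip(constraint(j), x)))

def solve_silp(n=2, eta=0.1, rounds=2000):
    """Approximately minimise max_{j in J} a(j)^T x over the simplex."""
    grid = [k / 100 for k in range(101)]  # finite discretisation of J
    x = [1.0 / n] * n                     # start at the centre of the simplex
    avg = [0.0] * n                       # average iterate (the returned solution)
    for _ in range(rounds):
        a = constraint(most_violated(x, grid))
        # Boosting-like multiplicative update, then renormalise onto the simplex.
        x = [xi * math.exp(-eta * ai) for xi, ai in zip(x, a)]
        s = sum(x)
        x = [xi / s for xi in x]
        avg = [av + xi / rounds for av, xi in zip(avg, x)]
    return avg

x = solve_silp()
```

For this toy family, max_j a(j)^T x equals max(x1, x2), so the minimiser over the simplex is the uniform point (0.5, 0.5); the averaged iterate converges towards it. A practical solver would instead re-solve a restricted LP over the constraints found so far (a column/constraint-generation master problem), but the oracle-plus-update loop above captures the basic interplay.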




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rätsch, G. (2006). Solving Semi-infinite Linear Programs Using Boosting-Like Methods. In: Balcázar, J.L., Long, P.M., Stephan, F. (eds) Algorithmic Learning Theory. ALT 2006. Lecture Notes in Computer Science, vol 4264. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11894841_2


  • DOI: https://doi.org/10.1007/11894841_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-46649-9

  • Online ISBN: 978-3-540-46650-5

  • eBook Packages: Computer Science, Computer Science (R0)
