Abstract
Linear optimization problems (LPs) with a very large or even infinite number of constraints frequently appear in many forms in machine learning. A linear program with m constraints can be written as
\[ \min_{{\mathbf{x}} \in {\mathcal{P}}^n} \ {\mathbf{c}}^\top {\mathbf{x}} \quad \text{s.t.} \quad {\bf a}_j^\top {\mathbf{x}} \leq b_j \quad \text{for all } j = 1, \ldots, m, \]
where I assume for simplicity that the domain of x is the n-dimensional probability simplex \({\mathcal{P}}^n\). Optimization problems with an infinite number of constraints of the form \({\bf a}_j^\top {\mathbf{x}}\leq b_j\), for all j∈J, are called semi-infinite when the index set J has infinitely many elements, e.g. J=ℝ. In the finite case the constraints can be described by a matrix with m rows and n columns, which can be used to solve the LP directly. In semi-infinite linear programs (SILPs) the constraints are often given in functional form, depending on j, or are defined implicitly, for instance by the outcome of another algorithm.
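The abstract only states the problem; the paper's boosting-like algorithms are not reproduced here. As a rough illustration of the setting, the following minimal sketch solves a small LP over the probability simplex with scipy.optimize.linprog and treats a semi-infinite constraint family \({\bf a}_j^\top {\mathbf{x}}\leq b_j\), j∈[0,1], by simple constraint generation: repeatedly find the most violated constraint on a fine grid of J, add it to the finite LP, and re-solve. The objective c, the functions a(j) and b(j), and the grid-based separation step are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

n = 3
c = np.array([1.0, 2.0, 0.5])            # objective: minimise c^T x (made-up data)

# Probability-simplex domain: x >= 0 and sum(x) = 1.
A_eq = np.ones((1, n))
b_eq = np.array([1.0])
bounds = [(0.0, None)] * n

def a(j):
    # Constraint normal a_j for an index j in [0, 1] (made-up functional form).
    return np.array([np.sin(2 * np.pi * j), np.cos(2 * np.pi * j), j])

def b(j):
    # Right-hand side b_j (made-up functional form).
    return 0.8 + 0.1 * j

grid = np.linspace(0.0, 1.0, 1001)        # discretisation of the index set J
A_ub, b_ub = [], []                       # constraints generated so far

for it in range(50):
    # Solve the finite LP restricted to the constraints generated so far.
    res = linprog(c,
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    x = res.x
    # Separation step: find the most violated constraint over the grid.
    violations = np.array([a(j) @ x - b(j) for j in grid])
    k = int(np.argmax(violations))
    if violations[k] <= 1e-8:             # (approximately) feasible for all j
        break
    A_ub.append(a(grid[k]))
    b_ub.append(b(grid[k]))

print("approximate solution x:", x, " objective:", float(c @ x))
```

Viewed in the dual, adding violated constraints of this kind corresponds to generating columns, which is the connection exploited by boosting-style approaches to such problems.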
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Rätsch, G. (2006). Solving Semi-infinite Linear Programs Using Boosting-Like Methods. In: Balcázar, J.L., Long, P.M., Stephan, F. (eds.) Algorithmic Learning Theory. ALT 2006. Lecture Notes in Computer Science, vol. 4264. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11894841_2
DOI: https://doi.org/10.1007/11894841_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-46649-9
Online ISBN: 978-3-540-46650-5