Abstract
In this paper, we construct a framework that allows one to polynomially bound the distributions produced by certain boosting algorithms, without significant performance loss.
Further, we study the case of Freund and Schapire’s AdaBoost algorithm, bounding its distributions to near-polynomial w.r.t. the example oracle’s distribution. An advantage of AdaBoost over other boosting techniques is that it does not require an a priori accuracy lower bound for the hypotheses accepted from the weak learner during the learning process.
We turn AdaBoost into an on-line boosting algorithm (boosting “by filtering”), which can be applied to a wider range of learning problems.
In particular, AdaBoost now applies to the problem of DNF-learning, answering in the affirmative a question posed by Jackson.
We also construct a hybrid boosting algorithm, thereby achieving the lowest bound possible for booster-produced distributions (in terms of Õ), and show a possible application to the problem of DNF-learning w.r.t. the uniform distribution.
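The distribution blow-up at issue comes from AdaBoost’s multiplicative reweighting of examples. The following is a minimal sketch of that update rule only, not of the bounded variants constructed in the paper; the fixed list of hypotheses passed in stands in for a weak-learning oracle, and all names are illustrative:

```python
import math

def adaboost_distributions(examples, labels, weak_hypotheses, rounds):
    """Track AdaBoost's example distribution D_t across boosting rounds.

    labels are in {-1, +1}; each weak hypothesis maps an example to {-1, +1}.
    Returns the list [D_0, D_1, ..., D_rounds].
    """
    n = len(examples)
    D = [1.0 / n] * n          # D_0 is the uniform distribution
    history = [list(D)]
    for t in range(rounds):
        h = weak_hypotheses[t]
        # Weighted error of h_t under the current distribution D_t.
        eps = sum(D[i] for i in range(n) if h(examples[i]) != labels[i])
        eps = min(max(eps, 1e-12), 1.0 - 1e-12)   # guard against eps in {0, 1}
        alpha = 0.5 * math.log((1.0 - eps) / eps)
        # Multiplicative update: correctly classified examples shrink,
        # misclassified ones grow -- this is what can drive D_t far from D_0.
        D = [D[i] * math.exp(-alpha * labels[i] * h(examples[i]))
             for i in range(n)]
        Z = sum(D)
        D = [d / Z for d in D]                     # renormalize to a distribution
        history.append(list(D))
    return history
```

For instance, on four uniformly weighted examples where the round’s hypothesis errs on exactly one, that example’s weight grows from 1/4 to 1/2 after a single round, i.e. twice its weight under the original distribution; iterating such updates is what makes an explicit polynomial bound on the booster’s distributions nontrivial.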
References
N. Bshouty, J. Jackson, C. Tamon. More efficient PAC-learning of DNF with membership queries under the uniform distribution. Proc. of the Twelfth Annual Conference on Computational Learning Theory (COLT), 1999, pp. 286–295.
Y. Freund. Boosting a weak learning algorithm by majority. Information and Computation, 1995, 121(2), pp. 256–285.
Y. Freund, R.E. Schapire. A decision-theoretic generalization of online learning and an application to boosting. Journal of Computer and System Sciences, 1997, 55(1), pp. 119–139.
J. Jackson. An efficient membership-query algorithm for learning DNF with respect to the uniform distribution. Journal of Computer and System Sciences, 1997, 55(3), pp. 414–440.
E. Kushilevitz, Y. Mansour. Learning decision trees using the Fourier spectrum. SIAM Journal on Computing, 1993, 22(6), pp. 1331–1348.
A.R. Klivans, R.A. Servedio. Boosting and hard-core sets. Proc. of the 40th Annual Symposium on Foundations of Computer Science (FOCS), 1999, pp. 624–633.
M. Kearns, L. Valiant. Cryptographic limitations on learning boolean formulae and finite automata. Journal of the ACM, 1994, 41(1), pp. 67–95.
L. Levin. Randomness and Non-determinism. Journal of Symbolic Logic, 1993, 58(3), pp. 1102–1103.
L. Valiant. A theory of the learnable. Communications of the ACM, 1984, 27(11), pp. 1134–1142.
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Bshouty, N.H., Gavinsky, D. (2001). On Boosting with Optimal Poly-Bounded Distributions. In: Helmbold, D., Williamson, B. (eds) Computational Learning Theory. COLT 2001. Lecture Notes in Computer Science(), vol 2111. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44581-1_32
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42343-0
Online ISBN: 978-3-540-44581-4