Abstract
We consider the problem of learning in the presence of classification noise. More specifically, we address the question of identifying conditions under which a learning algorithm can be transformed into a noise-tolerant one.
While it remains open whether every PAC learning algorithm can be made noise-tolerant, the bottom line of this work, loosely stated, is that any restriction on the amount of data an algorithm may retrieve from its input samples suffices to guarantee the existence of a noise-tolerant variant that is efficient whenever the original algorithm is. The result is obtained by proving that such restricted learning is equivalent to learning from statistical queries, and by applying Kearns' transformation from statistical-query learning to noise-tolerant learning.
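The transformation the abstract refers to rests on a simple correction: the expectation of a query χ(x, f(x)) over clean labels can be recovered from noisy examples when labels are flipped independently with a known rate η < 1/2, since E_noisy[χ] = (1−2η)·E_clean[χ] + η·E[χ(x,0)+χ(x,1)], and the last term depends only on the distribution over x. The following is a minimal Python sketch of this idea (the function name, the uniform distribution, and the example target concept are our illustrative choices, not from the paper):

```python
import random

def sq_from_noisy(chi, noisy_examples, eta):
    """Estimate E[chi(x, f(x))] over clean labels, given examples whose
    binary labels are flipped independently with known rate eta < 1/2.
    Uses the correction E_clean = (E_noisy - eta * S) / (1 - 2 * eta),
    where S = E[chi(x, 0) + chi(x, 1)] depends only on the x-marginal."""
    n = len(noisy_examples)
    e_noisy = sum(chi(x, y) for x, y in noisy_examples) / n
    s = sum(chi(x, 0) + chi(x, 1) for x, _ in noisy_examples) / n
    return (e_noisy - eta * s) / (1 - 2 * eta)

if __name__ == "__main__":
    random.seed(0)
    eta = 0.2
    f = lambda x: 1 if x > 0.3 else 0        # hypothetical target concept
    data = []
    for _ in range(50000):
        x = random.random()                  # x uniform on [0, 1]
        y = f(x) if random.random() > eta else 1 - f(x)
        data.append((x, y))
    # Query chi(x, b) = b estimates P[f(x) = 1]; its true value here is 0.7,
    # while the uncorrected noisy average would be biased toward 0.62.
    print(sq_from_noisy(lambda x, b: b, data, eta))
```

A full simulation of a statistical-query oracle would also bound the sample size needed per query as a function of the required tolerance; the sketch above only shows the bias correction at its core.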
References
Dana Angluin and Philip Laird. Learning from noisy examples. Machine Learning, 2(4):343–370, 1988.
Avrim Blum and Prasad Chalasani. Learning switching concepts. In 5th COLT, pages 231–242, 1992.
Shai Ben-David, Benny Chor, Oded Goldreich, and Michael Luby. On the theory of average case complexity. Journal of Computer and System Sciences, 44(2):193–219, 1992.
Shai Ben-David and Eli Dichterman. Learning with restricted focus of attention. In 6th COLT, pages 287–296, 1993.
Gyora M. Benedek and Alon Itai. Learnability by fixed distributions. In 1st COLT, pages 80–90, August 1988.
Merrick L. Furst, Jeffrey C. Jackson, and Sean W. Smith. Improved learning of AC0 functions. In 4th COLT, pages 317–325, August 1991.
Michael J. Kearns. Efficient noise-tolerant learning from statistical queries. In 25th STOC, pages 392–401, May 1993.
Michael J. Kearns and Ming Li. Learning in the presence of malicious errors. In 20th STOC, pages 267–280, May 1988.
Michael J. Kearns and Robert E. Schapire. Efficient distribution-free learning of probabilistic concepts. In 31st FOCS, pages 382–391, 1990.
Michael J. Kearns, Robert E. Schapire, and Linda M. Sellie. Towards efficient agnostic learning. In 5th COLT, pages 341–352, 1992.
Philip D. Laird. Learning from good and bad data. Technical Report YALEU/DCS/TR-551, Yale University, 1987. Ph.D. dissertation.
Nathan Linial, Yishai Mansour, and Noam Nisan. Constant depth circuits, Fourier transform, and learnability. In 30th FOCS, pages 574–579, 1989.
Robert H. Sloan. Types of noise in data for concept learning. In 1st COLT, pages 91–96, 1988.
L. G. Valiant. A theory of the learnable. CACM, 27(11):1134–1142, 1984.
© 1994 Springer-Verlag Berlin Heidelberg
Cite this paper
Ben-David, S., Dichterman, E. (1994). Learnability with restricted focus of attention guarantees noise-tolerance. In: Arikawa, S., Jantke, K.P. (eds) Algorithmic Learning Theory (AII/ALT 1994). Lecture Notes in Computer Science, vol 872. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58520-6_68
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-58520-6
Online ISBN: 978-3-540-49030-2