Abstract
We investigate the learnability of nested differences of intersection-closed classes in the presence of malicious noise. Examples of intersection-closed classes include axis-parallel rectangles, monomials, and linear subspaces. We present an on-line algorithm whose mistake bound is optimal in the sense that there are concept classes for which each learning algorithm (using nested differences as hypotheses) can be forced to make at least that many mistakes. We also present an algorithm for learning in the PAC model with malicious noise. Surprisingly, the noise rate tolerable by these algorithms does not depend on the complexity of the target class but only on the complexity of the underlying intersection-closed class.
Supported by grant J01028-MAT from the Fonds zur Förderung der wissenschaftlichen Forschung, Austria.
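The abstract builds on the standard Closure algorithm for intersection-closed classes: the hypothesis is always the smallest concept in the class containing the positive examples seen so far. A minimal sketch for the axis-parallel-rectangles example mentioned above (the closure is the coordinate-wise bounding box of the positives); all names here are illustrative, not taken from the paper:

```python
def closure(points):
    """Smallest axis-parallel rectangle containing `points`,
    as a list of (low, high) bounds per dimension."""
    dims = len(points[0])
    return [(min(p[d] for p in points), max(p[d] for p in points))
            for d in range(dims)]

def contains(rect, x):
    """Check whether point x lies inside the rectangle."""
    return all(lo <= x[d] <= hi for d, (lo, hi) in enumerate(rect))

def online_closure(stream):
    """On-line Closure learner: predict with the closure of the
    positives seen so far and count mistakes.
    `stream` yields (point, label) pairs."""
    positives, mistakes = [], 0
    for x, label in stream:
        pred = bool(positives) and contains(closure(positives), x)
        if pred != label:
            mistakes += 1
        if label:
            positives.append(x)
    return mistakes
```

This sketch handles the noise-free, non-nested case only; the paper's contribution is extending such learners to nested differences of concepts and making them robust to malicious noise.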
Copyright information
© 1995 Springer-Verlag Berlin Heidelberg
Cite this paper
Auer, P. (1995). Learning nested differences in the presence of malicious noise. In: Jantke, K.P., Shinohara, T., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 1995. Lecture Notes in Computer Science, vol 997. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-60454-5_33
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-60454-9
Online ISBN: 978-3-540-47470-8