Abstract
Reliable and probably useful learning, proposed by Rivest and Sloan, is a variant of probably approximately correct learning. In this model the hypothesis must never misclassify an instance but is allowed to answer “I don't know” with a low probability. We derive upper and lower bounds for the sample complexity of reliable and probably useful learning in terms of the combinatorial characteristics of the concept class to be learned. This is done by reducing reliable and probably useful learning to learning with one-sided error. The bounds also hold for a slightly weaker model that allows the learner to output, with a low probability, a hypothesis that makes misclassifications. We see that in these models learning with one oracle is more difficult than learning with two oracles. Our results imply that monotone Boolean conjunctions or disjunctions cannot be learned reliably and probably usefully from a polynomial number of examples. Rectangles in ℝⁿ for n ≥ 2 cannot be learned from any finite number of examples.
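To make the model concrete, here is a minimal sketch (not an algorithm from the paper) of a reliable learner for monotone Boolean conjunctions: it predicts a label only when every monotone conjunction consistent with the sample agrees, and answers “I don't know” (here `None`) otherwise. By construction it never misclassifies; the abstract's negative result says that for this class the “I don't know” probability cannot be driven down with polynomially many examples. The function name and interface are illustrative, not from the paper.

```python
from itertools import combinations

def learn_reliable_conjunction(sample, n):
    """sample: list of (x, label), x a 0/1 tuple of length n, label in {0, 1}.
    Returns a predictor that outputs 1, 0, or None for "I don't know"."""
    def evaluate(hyp, x):
        # A monotone conjunction hyp (a set of variable indices) is true on x
        # iff every listed variable is 1.
        return all(x[i] == 1 for i in hyp)

    def consistent(hyp):
        return all(evaluate(hyp, x) == bool(y) for x, y in sample)

    # Version space: every monotone conjunction consistent with the sample.
    # Exponential enumeration; fine for illustration on small n.
    version_space = [set(h) for r in range(n + 1)
                     for h in combinations(range(n), r)
                     if consistent(set(h))]

    def predict(x):
        votes = {evaluate(h, x) for h in version_space}
        # Answer only on unanimous agreement; otherwise abstain.
        return int(votes.pop()) if len(votes) == 1 else None

    return predict
```

On a sample that leaves several conjunctions consistent, the predictor abstains on any instance where they disagree, which is exactly the one-sided behavior the model requires.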
References
A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth, Learnability and the Vapnik-Chervonenkis dimension, J. Assoc. Comput. Mach., 36 (1989), 929–965.
A. Ehrenfeucht, D. Haussler, M. Kearns, and L. Valiant, A general lower bound on the number of examples needed for learning, Inform. and Comput., 82 (1989), 247–261.
D. Haussler, M. Kearns, N. Littlestone, and M. K. Warmuth, Equivalence of models for polynomial learnability, Inform. and Comput., 95 (1991), 129–161.
M. Kearns and M. Li, Learning in the presence of malicious errors, in Proceedings of the 20th ACM Symposium on Theory of Computing, The Association for Computing Machinery, New York, 1988, pp. 267–280.
M. Kearns, M. Li, L. Pitt, and L. Valiant, On the learnability of Boolean formulae, in Proceedings of the 19th ACM Symposium on Theory of Computing, The Association for Computing Machinery, New York, 1987, pp. 285–295.
J. Kivinen, Reliable and useful learning with uniform probability distributions, in S. Arikawa, S. Goto, S. Ohsuga, and T. Yokomori, editors, Proceedings of the 1st International Workshop on Algorithmic Learning Theory, Japanese Society for Artificial Intelligence, Tokyo, 1990, pp. 209–222.
T. M. Mitchell, Version spaces: a candidate elimination approach to rule learning, in Proceedings of the 5th International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence, Cambridge, MA, 1977, pp. 305–310.
B. K. Natarajan, Probably approximate learning of sets and functions, SIAM J. Comput., 20 (1991), 328–351.
L. Pitt and L. G. Valiant, Computational limitations on learning from examples, J. Assoc. Comput. Mach., 35 (1988), 965–984.
R. L. Rivest and R. Sloan, Learning complicated concepts reliably and usefully, in D. Haussler and L. Pitt, editors, Proceedings of the 1988 Workshop on Computational Learning Theory, Morgan Kaufmann, San Mateo, CA, 1988, pp. 69–79.
H. Shvaytser, A necessary condition for learning from positive examples, Mach. Learning, 5 (1990), 101–113.
L. G. Valiant, A theory of the learnable, Comm. ACM, 27 (1984), 1134–1142.
V. N. Vapnik and A. Ya. Chervonenkis, On the uniform convergence of the relative frequencies of events to their probabilities, Theory Probab. Appl., 16 (1971), 264–280.
Additional information
A preliminary version of this paper appeared under the title “Reliable and useful learning” in Proceedings of the 2nd Annual Workshop on Computational Learning Theory, Morgan Kaufmann, San Mateo, CA, 1989, pp. 365–380. This work was supported by the Academy of Finland.
Cite this article
Kivinen, J. Learning reliably and with one-sided error. Math. Systems Theory 28, 141–172 (1995). https://doi.org/10.1007/BF01191474