Abstract
Learning from positive examples occurs very frequently in natural learning. Valiant's PAC learning model takes many features of natural learning into account, but in most cases it fails to describe this kind of learning. We show that, in order to make learning from positive data possible, extra information about the underlying distribution must be provided to the learner. We define a PAC learning model from positive and unlabeled examples, as well as a PAC learning model from positive and unlabeled statistical queries. Relations with the PAC model ([Val84]), the statistical query model ([Kea93]), and the constant-partition classification noise model ([Dec97]) are studied. We show that k-DNF formulas and k-decision lists are learnable in both models, i.e., with far less information than is assumed by previously used algorithms.
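As a toy illustration of learning from positive data (not the paper's algorithm, which works with positive statistical queries), the following sketch shows the classic positive-only learner for monotone conjunctions: starting from all variables, it drops any variable that is 0 in some positive example. All names and the example target are illustrative assumptions.

```python
import random

def learn_conjunction(positive_examples, n):
    """Learn a monotone conjunction over n Boolean variables from
    positive examples only: keep exactly the variables that are 1
    in every positive example seen."""
    kept = set(range(n))
    for x in positive_examples:
        kept &= {i for i in range(n) if x[i] == 1}
    return kept

# Hypothetical target concept: x0 AND x2 over 4 variables.
target = {0, 2}
random.seed(0)

# Draw uniform examples but keep only the positive ones,
# simulating an oracle that returns positive examples.
samples = []
while len(samples) < 50:
    x = [random.randint(0, 1) for _ in range(4)]
    if all(x[i] == 1 for i in target):
        samples.append(x)

hypothesis = learn_conjunction(samples, 4)
```

With enough positive examples, each irrelevant variable is 0 in at least one of them with overwhelming probability, so the hypothesis converges to the target conjunction; the point of the paper is that such positive-only learning does not extend to richer classes without extra distributional information.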
References
D. Angluin and P. Laird. Learning from noisy examples. Machine Learning, 2(4):343–370, 1988.
D. Angluin. Inductive inference of formal languages from positive data. Inform. Control, 45(2):117–135, May 1980.
Shai Ben-David and Michael Lindenbaum. Learning distributions by their density levels: A paradigm for learning without a teacher. Journal of Computer and System Sciences, 55(1):171–182, August 1997.
R. Berwick. Learning from positive-only examples. In Machine Learning, Vol. II, pages 625–645. Morgan Kaufmann, 1986.
S.E. Decatur. PAC learning with constant-partition classification noise and applications to decision tree induction. In Proceedings of the Fourteenth International Conference on Machine Learning, 1997.
F. Denis. PAC learning from positive statistical queries. Technical report, L.I.F.L., 1998. Full version: http://www.lifl.fr/~denis
E.M. Gold. Language identification in the limit. Inform. Control, 10:447–474, 1967.
D. Helmbold, R. Sloan, and M.K. Warmuth. Learning integer lattices. SIAM J. COMPUT., 21(2):240–266, 1992.
M. Kearns. Efficient noise-tolerant learning from statistical queries. In Proceedings of the 25th ACM Symposium on the Theory of Computing, pages 392–401. ACM Press, New York, NY, 1993.
M.J. Kearns and U.V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994.
B.K. Natarajan. On learning boolean functions. In Proceedings of the 19th Annual ACM Symposium on Theory of Computing, pages 296–304. ACM Press, 1987.
B.K. Natarajan. Probably approximate learning of sets and functions. SIAM J. COMPUT., 20(2):328–351, 1991.
R.L. Rivest. Learning decision lists. Machine Learning, 2(3):229–246, 1987.
Takeshi Shinohara. Inductive inference from positive data is powerful. In Proceedings of the Third Annual Workshop on Computational Learning Theory, pages 97–110, Rochester, New York, 6–8 August 1990. ACM Press.
Haim Shvayster. A necessary condition for learning from positive examples. Machine Learning, 5:101–113, 1990.
L.G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134–1142, November 1984.
T. Zeugmann and S. Lange. A guided tour across the boundaries of learning recursive languages. In Algorithmic Learning for Knowledge-Based Systems, volume 961 of Lecture Notes in Artificial Intelligence, pages 190–258. Springer, 1995.
© 1998 Springer-Verlag Berlin Heidelberg
Cite this paper
Denis, F. (1998). PAC Learning from Positive Statistical Queries. In: Richter, M.M., Smith, C.H., Wiehagen, R., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 1998. Lecture Notes in Computer Science, vol 1501. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49730-7_9
Print ISBN: 978-3-540-65013-3
Online ISBN: 978-3-540-49730-1