Abstract
Learning theory is an active research area, with contributions from fields including artificial intelligence, theoretical computer science, and statistics. Its main thrust is to model learning phenomena in precise ways and to study the mathematical properties of the resulting scenarios. In this way one hopes to gain a better understanding of each learning scenario and of what is possible, or, as we call it, learnable, within it. Naturally, this goes hand in hand with the study of algorithms that achieve the required performance. Learning theory thus aims to define reasonable models of learning phenomena and to find provably successful algorithms within each such model. To complete the picture, one also seeks impossibility results showing that certain things are not learnable within a particular model, irrespective of the learning algorithm or method employed.
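As one concrete illustration of such a precise model, offered here purely as a standard example rather than as a summary of the papers in this volume, consider Valiant's PAC (probably approximately correct) framework. A concept class C over a domain X is PAC learnable if there exist an algorithm A and a polynomial p such that for every target concept c ∈ C, every distribution D over X, and all ε, δ ∈ (0, 1), the following holds: given m ≥ p(1/ε, 1/δ) labeled examples (x, c(x)) with each x drawn independently from D, A outputs a hypothesis h satisfying Pr[ Pr_{x∼D}[h(x) ≠ c(x)] ≤ ε ] ≥ 1 − δ. Within such a model, a positive result exhibits an algorithm meeting this guarantee for a class of interest, while an impossibility result shows that no algorithm can meet it, for instance via lower bounds in terms of the Vapnik-Chervonenkis dimension.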