Abstract
Learning in the limit deals mainly with the question of what can be learned, but rarely with the question of how fast. The purpose of this paper is to develop a learning model that stays very close to Gold's model but enables questions about the speed of convergence to be answered. To do this, we have to assume that positive examples are generated by some stochastic model. If the stochastic model is fixed (measure one learning), then all recursively enumerable sets become identifiable, a clear departure from Gold's model. In contrast, we define learning from random text as identifying a class of languages for every stochastic model in which examples are generated independently and identically distributed. As it turns out, this model stays close to learning in the limit. We compare both models in several respects, in particular under restriction to various learning strategies and with regard to the existence of locking sequences. Finally, we present some results on the speed of convergence: in general, convergence can be arbitrarily slow, but for recursive learners it cannot be slower than some magic function. Every language can be learned with exponentially small tail bounds, and these bounds are best possible. All results apply fully to Gold-style learners, since Gold's model is a proper subset of learning from random text.
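The setting can be illustrated with a toy simulation (not from the paper; the language class and learner below are hypothetical choices for illustration). Take the class of languages L_n = {0, ..., n} and a learner that conjectures L_m for the largest example m seen so far. On a random text, where examples are drawn i.i.d. from the target language, the conjecture converges to the correct index with probability one, and the probability of not yet having converged after t examples decays exponentially in t, matching the flavor of the tail bounds discussed in the abstract.

```python
import random

def simulate(n=9, steps=200, seed=1):
    """Simulate learning L_n = {0, ..., n} from a random text.

    Examples are drawn i.i.d. uniformly from L_n (one possible
    stochastic model).  The learner's conjecture is the largest
    example seen so far; it stabilizes once n itself appears.
    Returns the final conjecture and the index of the last mind
    change (the convergence point on this text).
    """
    rng = random.Random(seed)
    conjecture = -1
    last_change = 0
    for t in range(steps):
        x = rng.randrange(n + 1)   # i.i.d. positive example from L_n
        if x > conjecture:         # mind change: enlarge the hypothesis
            conjecture, last_change = x, t
    return conjecture, last_change

final, settled = simulate()
print(final, settled)
```

Since the conjecture fails to equal n after t examples only if n was missed t times in a row, the tail probability here is (n/(n+1))^t, i.e. exponentially small, which is why this simple class exhibits the best-possible behavior the abstract refers to.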
References
D. Angluin. Finding patterns common to a set of strings. Journal of Computer and System Sciences, 21(1):46–62, 1980.
L. Blum and M. Blum. Toward a mathematical theory of inductive inference. Information and Control, 28:125–155, 1975.
T. Erlebach, P. Rossmanith, H. Stadtherr, A. Steger, and T. Zeugmann. Learning one-variable pattern languages very efficiently on average, in parallel, and by asking queries. In M. Li and A. Maruoka, editors, Proceedings of the 8th International Workshop on Algorithmic Learning Theory, number 1316 in Lecture Notes in Computer Science, pages 260–276. Springer-Verlag, October 1997.
M. A. Fulk. Prudence and other conditions on formal language learning. Information and Computation, 85:1–11, 1990.
E. M. Gold. Language identification in the limit. Information and Control, 10:447–474, 1967.
S. Kapur and G. Bilardi. Language learning from stochastic input. In Proceedings of the 5th International Workshop on Computational Learning Theory, pages 303–310. ACM, 1992.
S. Kapur and G. Bilardi. Learning of indexed families from stochastic input. In The Australasian Theory Symposium (CATS’96), pages 162–167, Melbourne, Australia, January 1996.
D. Osherson, M. Stob, and S. Weinstein. Systems That Learn: An Introduction for Cognitive and Computer Scientists. MIT Press, Cambridge, Mass., 1986.
R. Reischuk and T. Zeugmann. A complete and tight average-case analysis of learning monomials. In C. Meinel and S. Tison, editors, Proceedings of the 16th Symposium on Theoretical Aspects of Computer Science, number 1563 in Lecture Notes in Computer Science, pages 414–423. Springer-Verlag, 1999.
P. Rossmanith and T. Zeugmann. Learning k-variable pattern languages efficiently stochastically finite on average from positive data. In V. Honavar and G. Slutzki, editors, Proceedings of the 4th International Colloquium on Grammatical Inference, number 1433 in Lecture Notes in Artificial Intelligence, pages 13–24, Ames, Iowa, July 1998. Springer-Verlag.
G. Schäfer. Über Eingabeabhängigkeit und Komplexität von Inferenzstrategien. PhD thesis, Rheinisch Westfälische Technische Hochschule Aachen, 1984. In German.
K. Wexler and P. Culicover. Formal Principles of Language Acquisition. MIT Press, Cambridge, Mass., 1980.
T. Zeugmann. Lange and Wiehagen’s pattern learning algorithm: An average-case analysis with respect to its total learning time. Annals of Mathematics and Artificial Intelligence, 23(1–2):117–145, 1998.
Copyright information
© 1999 Springer-Verlag Berlin Heidelberg
Cite this paper
Rossmanith, P. (1999). Learning from Random Text. In: Watanabe, O., Yokomori, T. (eds) Algorithmic Learning Theory. ALT 1999. Lecture Notes in Computer Science, vol 1720. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46769-6_11
DOI: https://doi.org/10.1007/3-540-46769-6_11
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-66748-3
Online ISBN: 978-3-540-46769-4