Abstract
Learnability of families of recursive languages from positive data is studied in the Gold paradigm of inductive inference. Much of this work has focused on understanding how a learner's ability to learn languages is affected when the learner is constrained in various ways. For example, motivated by work in inductive logic, different notions of monotonicity have been studied, variously reflecting the requirement that the learner's guesses must monotonically ‘improve’ with respect to the target language. Various ways of combining constraints such as monotonicity are defined and their relationships explored. Under one version of a disjunctive combination of a set of constraints, learning is considered successful as long as, on any presentation of a language, at least one of the constraints in the set is satisfied. It is also shown that a conjunctive combination of certain monotonicity constraints is less powerful than the set-theoretic intersection of the classes corresponding to the individual constraints.
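The notions in the abstract can be made concrete with a small sketch. This is not the paper's formalism (there, hypotheses are indices of recursive languages, not sets, and the constraint definitions are more refined): it is a minimal illustration, with hypothetical names, of a Gold-style learner for finite languages, two monotonicity constraints on its sequence of conjectures, and the conjunctive ("all constraints hold") versus disjunctive ("at least one constraint holds") ways of combining them.

```python
# Illustrative sketch only; names and definitions are simplified assumptions,
# not the paper's formal framework.

def learner(text_prefix):
    """Conjecture the set of data seen so far. This learner identifies
    every finite language in the limit from positive data."""
    return set(text_prefix)

def strong_monotonic(hyps):
    # Each conjectured language must be contained in the next one.
    return all(a <= b for a, b in zip(hyps, hyps[1:]))

def monotonic(hyps, target):
    # Improvement is only required on the target language L:
    # L(h_i) ∩ L must be contained in L(h_{i+1}) ∩ L.
    return all((a & target) <= (b & target) for a, b in zip(hyps, hyps[1:]))

target = {1, 2, 3}
text = [1, 2, 2, 3, 1]  # a positive presentation (text) of the target
hyps = [learner(text[:i + 1]) for i in range(len(text))]

constraints = [strong_monotonic(hyps), monotonic(hyps, target)]
conjunctive = all(constraints)  # every constraint satisfied on this text
disjunctive = any(constraints)  # at least one constraint satisfied
print(hyps[-1], conjunctive, disjunctive)  # → {1, 2, 3} True True
```

Since every constraint implies its own disjunctive combination, disjunctive combination can only enlarge the class of learnable families; the abstract's result is that the conjunctive combination can be strictly *smaller* than the intersection of the individually learnable classes.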
The author would like to thank the anonymous reviewers for useful suggestions.
Copyright information
© 1994 Springer-Verlag Berlin Heidelberg
Cite this paper
Kapur, S. (1994). Language learning under various types of constraint combinations. In: Arikawa, S., Jantke, K.P. (eds) Algorithmic Learning Theory (AII/ALT 1994). Lecture Notes in Computer Science, vol 872. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58520-6_77
Print ISBN: 978-3-540-58520-6
Online ISBN: 978-3-540-49030-2