Abstract
A central question in learning in the limit from positive data is which classes of languages are learnable with respect to a given learning criterion. We are particularly interested in the reasons why a class of languages may be unlearnable. We consider two types of reasons. One type is topological (for example, Gold showed that no class containing an infinite language and all of its finite sub-languages is learnable). The other type is computational (as learners are required to be algorithmic). In particular, two learning criteria might allow for learning different classes of languages because of different topological restrictions, or because of different computational restrictions.
In this paper we formalize the idea of two learning criteria separating topologically in learning power. This allows us to study more closely why two learning criteria separate in learning power. For a variety of learning criteria (concerning Fex, monotone, iterative and feedback learning), we show that some pairs of criteria separate topologically, while others, which are known to separate, are shown not to separate topologically. When two criteria do not separate topologically, any known separation between them must necessarily exploit computational restrictions.
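The following is a minimal sketch (the two toy strategies and all names are illustrative assumptions, not constructions from the paper) of the topological obstruction behind Gold's example: for a class containing the infinite language N = {0, 1, 2, ...} and all of its finite subsets, a strategy that never generalizes fails to converge on N, while a strategy that does generalize overgeneralizes on some finite sub-language.

```python
# A toy illustration (not from the paper) of Gold's topological argument:
# no learner identifies a class containing the infinite language
# N = {0, 1, 2, ...} together with all of its finite subsets.

def cautious(prefix):
    """Strategy 1: conjecture exactly the finite set of elements seen so far."""
    return frozenset(prefix)

def bold(prefix, threshold=5):
    """Strategy 2: conjecture all of N once `threshold` data points have appeared."""
    return 'N' if len(prefix) >= threshold else frozenset(prefix)

# Failure mode 1: on the text 0, 1, 2, ... for N, the cautious strategy outputs
# a new conjecture at every step, so it never converges to (a name for) N.
text_for_N = list(range(10))
guesses = [cautious(text_for_N[:i + 1]) for i in range(len(text_for_N))]
assert len(set(guesses)) == len(guesses)  # a different guess at every step

# Failure mode 2: on a text for the finite language {0, 1, 2} that repeats its
# elements, the bold strategy eventually conjectures N and stays wrong forever.
text_for_finite = [0, 1, 2] + [2] * 7
assert bold(text_for_finite) == 'N'  # overgeneralizes on a finite sub-language
```

Gold's argument turns these two failure modes into a proof that every learner, computable or not, fails on some language of such a class; this independence from computability is what makes the restriction topological rather than computational.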
References
Angluin, D.: Inductive inference of formal languages from positive data. Information and Control 45, 117–135 (1980)
Blum, L., Blum, M.: Toward a mathematical theory of inductive inference. Information and Control 28, 125–155 (1975)
Baliga, G., Case, J.: Learnability: Admissible, co-finite, and hypersimple sets. Journal of Computer and System Sciences 53, 26–32 (1996)
Case, J.: The power of vacillation in language learning. SIAM Journal on Computing 28, 1941–1969 (1999)
Case, J., Jain, S., Lange, S., Zeugmann, T.: Incremental concept learning for bounded data mining. Information and Computation 152, 74–110 (1999)
Case, J., Lynes, C.: Machine inductive inference and language identification. In: Nielsen, M., Schmidt, E.M. (eds.) ICALP 1982. LNCS, vol. 140, pp. 107–115. Springer, Heidelberg (1982)
Case, J., Moelius, S.: U-shaped, iterative, and iterative-with-counter learning. Machine Learning 72, 63–88 (2008)
Case, J., Moelius, S.: Optimal language learning from positive data. Information and Computation 209, 1293–1311 (2011)
de Jongh, D., Kanazawa, M.: Angluin's theorem for indexed families of r.e. sets and applications. In: Proc. of COLT (Computational Learning Theory), pp. 193–204 (1996)
Gold, E.: Language identification in the limit. Information and Control 10, 447–474 (1967)
Heinz, J., Kasprzik, A., Kötzing, T.: Learning in the limit with lattice-structured hypothesis spaces. Theoretical Computer Science 457, 111–127 (2012)
Jantke, K.: Monotonic and non-monotonic inductive inference of functions and patterns. In: Dix, J., Schmitt, P.H., Jantke, K.P. (eds.) NIL 1990. LNCS, vol. 543, pp. 161–177. Springer, Heidelberg (1991)
Jech, T.: Set Theory. Academic Press, NY (1978)
Jain, S., Moelius, S., Zilles, S.: Learning without coding. Theoretical Computer Science 473, 124–148 (2013)
Jain, S., Osherson, D., Royer, J., Sharma, A.: Systems that Learn: An Introduction to Learning Theory, 2nd edn. MIT Press, Cambridge (1999)
Kötzing, T.: Abstraction and Complexity in Computational Learning in the Limit. PhD thesis, University of Delaware (2009), http://pqdtopen.proquest.com/#viewpdf?dispub=3373055
Kötzing, T.: Iterative learning from positive data and counters. In: Kivinen, J., Szepesvári, C., Ukkonen, E., Zeugmann, T. (eds.) ALT 2011. LNCS, vol. 6925, pp. 40–54. Springer, Heidelberg (2011)
Kinber, E., Stephan, F.: Language learning from texts: Mind changes, limited memory and monotonicity. Information and Computation 123, 224–241 (1995)
Lange, S., Zeugmann, T.: Monotonic versus non-monotonic language learning. In: Proc. of Nonmonotonic and Inductive Logic, pp. 254–269 (1993)
Lange, S., Zeugmann, T.: Incremental learning from positive data. Journal of Computer and System Sciences 53, 88–103 (1996)
Osherson, D., Stob, M., Weinstein, S.: Note on a central lemma of learning theory. Journal of Mathematical Psychology 27, 86–92 (1983)
Osherson, D., Stob, M., Weinstein, S.: Systems that Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists. MIT Press, Cambridge (1986)
Osherson, D., Weinstein, S.: Criteria of language learning. Information and Control 52, 123–138 (1982)
Royer, J., Case, J.: Subrecursive Programming Systems: Complexity and Succinctness. Research Monograph in Progress in Theoretical Computer Science. Birkhäuser, Boston (1994)
Rogers, H.: Theory of Recursive Functions and Effective Computability. McGraw Hill, New York (1967); reprinted by MIT Press, Cambridge (1987)
Wexler, K., Culicover, P.: Formal Principles of Language Acquisition. MIT Press, Cambridge (1980)
Wiehagen, R.: Limes-Erkennung rekursiver Funktionen durch spezielle Strategien. Elektronische Informationsverarbeitung und Kybernetik 12, 93–99 (1976)
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Case, J., Kötzing, T. (2013). Topological Separations in Inductive Inference. In: Jain, S., Munos, R., Stephan, F., Zeugmann, T. (eds.) Algorithmic Learning Theory. ALT 2013. Lecture Notes in Computer Science, vol. 8139. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40935-6_10
DOI: https://doi.org/10.1007/978-3-642-40935-6_10
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-40934-9
Online ISBN: 978-3-642-40935-6