Abstract
Computability theorists have extensively studied sets A whose elements can be enumerated by Turing machines. These sets, also called computably enumerable sets, can be identified with their Gödel codes. Although each Turing machine has a unique Gödel code, different Turing machines can enumerate the same set. Thus, knowing a computably enumerable set means knowing one of its infinitely many Gödel codes. In the approach to learning theory stemming from E.M. Gold’s seminal paper [9], an inductive inference learner for a computably enumerable set A is a system or a device, usually algorithmic, which, when successively fed data for A one by one, outputs a sequence of Gödel codes, one by one, that at a certain point stabilizes on codes correct for A. The convergence is called semantic or behaviorally correct; if, in addition, the same code for A is eventually output, the convergence is also called syntactic or explanatory. There are classes of sets that are semantically inferable but not syntactically inferable.
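The notion of syntactic (explanatory) convergence can be illustrated with a toy sketch, not taken from the chapter: for the illustrative class of initial segments L_n = {0, 1, ..., n}, with the code of L_n taken to be n, a learner that always guesses the largest element seen so far identifies every L_n in the limit from positive data, and its output stabilizes on a single correct code.

```python
# Toy Gold-style learner (illustrative assumption, not the chapter's method):
# it identifies the class L_n = {0, 1, ..., n} in the limit from a text, i.e.
# a stream of positive data. The hypothesis code for L_n is simply n; once
# every element of L_n has appeared, the output never changes again, so the
# convergence is syntactic (explanatory).

def learn_initial_segment(text):
    """Yield a hypothesis code after each datum of the text."""
    hypothesis = 0
    for datum in text:
        hypothesis = max(hypothesis, datum)  # guess L_n for the largest n seen
        yield hypothesis

# A text for L_3 = {0, 1, 2, 3}: every element appears at least once.
guesses = list(learn_initial_segment([1, 0, 3, 2, 3, 1]))
# guesses == [1, 1, 3, 3, 3, 3]: the learner stabilizes on the correct code 3.
```

The sequence of guesses changes only finitely often and then repeats the correct code, which is exactly the explanatory convergence criterion for this toy class.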
Here, we are also concerned with generalizing inductive inference from sets, which are collections of distinct elements that are mutually independent, to mathematical structures in which various elements may be interrelated. This study was recently initiated by F. Stephan and Yu. Ventsov. For example, they systematically investigated inductive inference of the ideals of computable rings. With F. Stephan we continued this line of research by studying inductive inference of computably enumerable vector subspaces and other closure systems.
In particular, we showed how different convergence criteria interact with different ways of supplying data to the learner. Positive data for a set A are its elements, while negative data for A are the elements of its complement. Inference from text means that only positive data are supplied to the learner and that, in the limit, all positive data are given. Inference from switching means that changes from positive to negative data or vice versa are allowed, but if there are only finitely many such changes, then in the limit all data of the eventually requested type (either positive or negative) are supplied. Inference from an informant means that positive and negative data are supplied alternately and that, in the limit, all data are supplied. For sets, inference from switching is more restrictive than inference from an informant, but more powerful than inference from text. On the other hand, for example, the class of computably enumerable vector spaces over an infinite field, which is syntactically inferable from text, does not change if we allow semantic convergence, or inference from switching, but not both at the same time. While many classes of inferable algebraic structures have nice algebraic characterizations when learning from text or from switching is considered, we do not know of such characterizations for learning from an informant.
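The three data presentations can be sketched concretely over an initial segment of the natural numbers; the helper names and the finite bound below are illustrative assumptions, not definitions from the chapter:

```python
# Illustrative sketch of the three data presentations for a set A, restricted
# to numbers below a finite bound: a text lists only elements of A, an
# informant supplies all data with labels, and a switching sequence answers
# requests whose type (positive or negative) the learner may change.

def text_for(A, bound):
    """Positive data only: the elements of A below the bound."""
    return [x for x in range(bound) if x in A]

def informant_for(A, bound):
    """All data, labeled: (x, True) if x is in A, (x, False) otherwise."""
    return [(x, x in A) for x in range(bound)]

def switching_for(A, bound, requests):
    """Serve each request in order: 'pos' yields the next unseen element
    of A, 'neg' the next unseen element of the complement of A."""
    pos = iter(x for x in range(bound) if x in A)
    neg = iter(x for x in range(bound) if x not in A)
    return [next(pos) if r == 'pos' else next(neg) for r in requests]

A = {0, 2, 4, 6}
# text_for(A, 8)      -> [0, 2, 4, 6]
# informant_for(A, 4) -> [(0, True), (1, False), (2, True), (3, False)]
# switching_for(A, 8, ['pos', 'neg', 'pos']) -> [0, 1, 2]
```

In the limit (bound growing without end), a text exhausts all positive data, an informant exhausts all data, and a switching sequence with finitely many type changes exhausts all data of the eventually requested type, matching the definitions above.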
References
Angluin, D. (1980). “Inductive Inference of Formal Languages from Positive Data”, Information and Control 45, 117–135.
Baliga, G., Case, J. and Jain, S. (1995). “Language Learning with Some Negative Information”, Journal of Computer and System Sciences 51, 273–285.
Blum, L. and Blum, M. (1975). “Toward a Mathematical Theory of Inductive Inference”, Information and Control 28, 125–155.
Case, J. and Lynes, C. (1982). “Machine Inductive Inference and Language Identification”, in Nielsen, M. and Schmidt, E.M. [18], 107–115.
Case, J. and Smith, C. (1983). “Comparison of Identification Criteria for Machine Inductive Inference”, Theoretical Computer Science 25, 193–220.
Cesa-Bianchi, N., Numao, M. and Reischuk, R. (eds.) (2002). Algorithmic Learning Theory: 13th International Conference, Lecture Notes in Artificial Intelligence 2533, Berlin: Springer-Verlag.
Downey, R.G. and Remmel, J.B. (1998). “Computable Algebras and Closure Systems: Coding Properties”, in Ershov, Yu.L., Goncharov, S.S., Nerode, A. and Remmel, J.B. [8], 977–1039.
Ershov, Yu.L., Goncharov, S.S., Nerode, A. and Remmel, J.B. (eds.) (1998). Handbook of Recursive Mathematics 2, Amsterdam: Elsevier.
Gold, E.M. (1967). “Language Identification in the Limit”, Information and Control 10, 447–474.
Griffor, E.R. (ed.) (1999). Handbook of Computability Theory, Amsterdam: Elsevier.
Harizanov, V.S. and Stephan, F. (2002). “On the Learnability of Vector Spaces”, in Cesa-Bianchi, N., Numao, M. and Reischuk, R. [6], 233–247.
Jain, S. and Stephan, F. (2003). “Learning by Switching Type of Information”, Information and Computation 185, 89–104.
Jain, S., Osherson, D.N., Royer, J.S. and Sharma, A. (1999). Systems That Learn: An Introduction to Learning Theory, 2nd ed., Cambridge (Mass.): MIT Press.
Kalantari, I. and Retzlaff, A. (1977). “Maximal Vector Spaces Under Automorphisms of the Lattice of Recursively Enumerable Vector Spaces”, Journal of Symbolic Logic 42, 481–491.
Kaplansky, I. (1974). Commutative Rings, Chicago: The University of Chicago Press.
Metakides, G. and Nerode, A. (1977). “Recursively Enumerable Vector Spaces”, Annals of Mathematical Logic 11, 147–171.
Motoki, T. (1991). “Inductive Inference from All Positive and Some Negative Data”, Information Processing Letters 39, 177–182.
Nielsen, M. and Schmidt, E.M. (eds.) (1982). Automata, Languages and Programming: Proceedings of the 9th International Colloquium, Lecture Notes in Computer Science 140, Berlin: Springer-Verlag.
Odifreddi, P. (1989). Classical Recursion Theory, Amsterdam: North-Holland.
Osherson, D.N. and Weinstein, S. (1982). “Criteria of Language Learning”, Information and Control 52, 123–138.
Osherson, D.N., Stob, M. and Weinstein, S. (1986). Systems That Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists, Cambridge (Mass.): MIT Press.
Sharma, A. (1998). “A Note on Batch and Incremental Learnability”, Journal of Computer and System Sciences 56, 272–276.
Soare, R.I. (1987). Recursively Enumerable Sets and Degrees. A Study of Computable Functions and Computably Generated Sets, Berlin: Springer-Verlag.
Stephan, F. and Ventsov, Yu. (2001). “Learning Algebraic Structures from Text”, Theoretical Computer Science 268, 221–273.
Stoltenberg-Hansen, V. and Tucker, J.V. (1999). “Computable Rings and Fields”, in Griffor, E.R. [10], 363–447.
Copyright information
© 2007 Springer
About this chapter
Cite this chapter
Harizanov, V.S. (2007). Inductive Inference Systems for Learning Classes of Algorithmically Generated Sets and Structures. In: Friend, M., Goethe, N.B., Harizanov, V.S. (eds) Induction, Algorithmic Learning Theory, and Philosophy. Logic, Epistemology, and the Unity of Science, vol 9. Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-6127-1_2
DOI: https://doi.org/10.1007/978-1-4020-6127-1_2
Publisher Name: Springer, Dordrecht
Print ISBN: 978-1-4020-6126-4
Online ISBN: 978-1-4020-6127-1
eBook Packages: Humanities, Social Sciences and Law; Philosophy and Religion (R0)