Abstract
We consider a learning model in which each element of a class of recursive functions is to be identified in the limit by a computable strategy. Given gradually growing initial segments of the graph of a function, the learner generates a sequence of hypotheses that is required to converge to a correct hypothesis. Here correct means that the hypothesis is an index of the target function in a given numbering. Restricting the basic definition of learning in the limit yields several inference criteria, whose learning power has already been compared in the non-uniform setting.
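As an illustration of the basic model, the following minimal sketch (not taken from the paper; the finite hypothesis space and the name identify_by_enumeration are assumptions made for exposition) shows identification by enumeration: fed ever longer initial segments f(0), ..., f(n), the learner outputs the least index consistent with the data and eventually stabilises on a correct index.

```python
# Minimal sketch of learning in the limit (EX-identification), assuming the
# hypothesis space is given as a Python list of total functions on the
# natural numbers. Not the paper's construction; names are illustrative.
from typing import Callable, List, Optional, Sequence


def identify_by_enumeration(
    hypothesis_space: List[Callable[[int], int]],
    segment: Sequence[int],
) -> Optional[int]:
    """Return the least index whose function agrees with the initial segment.

    On ever longer segments of a function in the class, the returned indices
    converge to a fixed correct index, i.e. identification in the limit.
    """
    for index, candidate in enumerate(hypothesis_space):
        if all(candidate(x) == segment[x] for x in range(len(segment))):
            return index
    return None  # no consistent hypothesis: the function lies outside the class


# Example: the hypothesis sequence 0, 1, 1, 1 stabilises on index 1,
# a correct index for the target function x -> 2 * x.
space = [lambda x: x, lambda x: 2 * x, lambda x: x * x]
target = space[1]
for n in range(1, 5):
    print(identify_by_enumeration(space, [target(x) for x in range(n)]))
```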
The aim of uniform learning is to synthesize appropriate identification strategies for infinitely many classes of recursive functions by a single uniform method, i.e. a kind of meta-learning is considered. Within this framework the learning power of the various inference criteria can again be compared. If a single numbering is fixed as the hypothesis space for all classes of recursive functions, the results resemble those of the non-uniform case. The hierarchy of inference criteria changes, however, if different hypothesis spaces are admitted for different classes of functions. Interestingly, in uniform identification most of the inference criteria can be separated by collections of finite classes of recursive functions.
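To convey the flavour of the uniform setting, here is a self-contained sketch under strong simplifying assumptions: class descriptions are just finite Python lists of total functions, and synthesize_learner is a hypothetical name. It shows one uniform procedure that maps any such description to a learner for the described class; the paper's actual constructions over numberings and infinitely many descriptions are not reproduced here.

```python
# Hedged sketch of uniform (meta-)learning: a single procedure turns a
# description of a finite class into an identification strategy for it.
# This is an illustrative assumption, not the paper's construction.
from typing import Callable, List, Optional, Sequence

Learner = Callable[[Sequence[int]], Optional[int]]


def synthesize_learner(described_class: List[Callable[[int], int]]) -> Learner:
    """Uniformly turn a finite class description into a learning strategy.

    The returned strategy uses the description itself as its hypothesis
    space: on a segment f(0), ..., f(n) it outputs the least consistent
    index, so on every function of the class it converges to a correct index.
    """
    def learner(segment: Sequence[int]) -> Optional[int]:
        for index, candidate in enumerate(described_class):
            if all(candidate(x) == segment[x] for x in range(len(segment))):
                return index
        return None  # observed values fit no function of the described class
    return learner


# One uniform method, two different class descriptions, two learners.
learner_a = synthesize_learner([lambda x: x + 1, lambda x: 0])
learner_b = synthesize_learner([lambda x: x % 2, lambda x: 7])
print(learner_a([1, 2, 3]))   # index 0: the successor function
print(learner_b([7, 7, 7]))   # index 1: the constant-7 function
```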
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Zilles, S. (2001). On the Comparison of Inductive Inference Criteria for Uniform Learning of Finite Classes. In: Abe, N., Khardon, R., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2001. Lecture Notes in Computer Science, vol 2225. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45583-3_20
DOI: https://doi.org/10.1007/3-540-45583-3_20
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42875-6
Online ISBN: 978-3-540-45583-7