Theoretical Computer Science

Volume 620, 21 March 2016, Pages 33-45

Topological separations in inductive inference

https://doi.org/10.1016/j.tcs.2015.10.036

Abstract

In learning in the limit from positive data, a major concern is which classes of languages are learnable with respect to a given learning criterion. We are particularly interested herein in the reasons why a class of languages may be unlearnable. We consider two types of reasons. One type is called topological: here it does not help if the learners are allowed to be uncomputable (an example of Gold's is that no class containing an infinite language and all of its finite sub-languages is learnable, even by an uncomputable learner). The other type is called computational: here the unlearnability arises only because the learners are required to be algorithmic. In particular, two learning criteria might allow for learning different classes of languages from one another, depending on whether the unlearnability is of the topological or of the computational type.
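
To make Gold's example concrete, the following is a sketch of the standard argument in common inductive-inference notation; the symbols $L$, $\mathcal{C}$, $M$, $T$, and $\mathrm{content}(\cdot)$ are ours, not taken from the paper.

Let $L$ be infinite, and let
\[
  \mathcal{C} \;=\; \{L\} \,\cup\, \{D \subseteq L : D \text{ is finite}\}.
\]
Suppose some (possibly uncomputable) learner $M$ learns every member of $\mathcal{C}$ in the limit. Build a text $T$ for $L$ in stages, starting from the empty sequence $\sigma_0$. At stage $n$, first extend $\sigma_n$ by repeating its own elements until $M$ outputs a hypothesis for the finite set $\mathrm{content}(\sigma_n)$; this must happen, since repeating $\sigma_n$ forever yields a text for that finite set, on which $M$ converges to a correct hypothesis. Then append the $n$-th element of $L$ to obtain $\sigma_{n+1}$. The limit $T = \bigcup_n \sigma_n$ is a text for $L$ on which $M$ outputs correct hypotheses for infinitely many distinct finite sets, so $M$ never converges on $T$ and fails to learn $L$; a contradiction.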

In this paper we formalize the idea of two learning criteria separating topologically in learning power, which allows us to study more closely why two learning criteria separate. For a variety of learning criteria, concerning vacillatory, monotone, (several kinds of) iterative and feedback learning, we show that certain pairs of criteria separate topologically, while certain others, which are known to separate, are shown not to separate topologically. Showing that two learning criteria do not separate topologically implies that any known separation between them must necessarily exploit the algorithmicity of the learner.
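
For orientation, one plausible reading of "separating topologically" is the following; this is our gloss on the abstract, written in standard notation, and not necessarily the paper's exact definition. For a learning criterion $\delta$, let $[\delta]_{\mathrm{comp}}$ denote the collection of language classes learnable under $\delta$ by some computable learner, and $[\delta]_{\mathrm{arb}}$ the collection learnable by some arbitrary (possibly uncomputable) learner. Then
\[
  \delta, \delta' \text{ separate} \;\iff\; [\delta]_{\mathrm{comp}} \neq [\delta']_{\mathrm{comp}},
  \qquad
  \delta, \delta' \text{ separate topologically} \;\iff\; [\delta]_{\mathrm{arb}} \neq [\delta']_{\mathrm{arb}}.
\]
In particular, if $[\delta]_{\mathrm{arb}} = [\delta']_{\mathrm{arb}}$ while $[\delta]_{\mathrm{comp}} \neq [\delta']_{\mathrm{comp}}$, then every witness to the separation must exploit the algorithmicity of the learner.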

Keywords

Inductive inference
Language learning
Non-computable learning
Topological
