Abstract
Computability-theoretic learning theory (machine inductive inference) typically concerns learning programs for languages or functions from a stream of complete data about them and, importantly, allows mind changes as to the conjectured programs. The theory takes algorithmicity into account but typically does not take into account the feasibility of computational resources. This paper provides some example results and open problems for three ways the theory can be constrained by computational feasibility: the learner has memory limitations; the learned programs are desired to be optimal; and there are feasibility constraints on computing each output program, together with further constraints intended to minimize postponement tricks.
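The learning model the abstract alludes to can be sketched concretely. Below is a toy, hedged illustration (not from the paper) of Gold-style identification in the limit by enumeration: the learner sees an ever-growing stream of input/output pairs of a target function and, after each datum, conjectures the least hypothesis in a fixed enumeration consistent with all data seen so far; each switch of conjecture is a "mind change". The hypothesis space and all names are illustrative assumptions.

```python
# Toy sketch of learning in the limit ("identification by enumeration").
# The learner outputs, after each data point, the index of the first
# hypothesis in a fixed enumeration consistent with the data so far.

def learn_in_the_limit(hypotheses, data_stream):
    """Yield the index of the least hypothesis consistent with the data so far."""
    seen = []
    for x, y in data_stream:
        seen.append((x, y))
        for i, h in enumerate(hypotheses):
            if all(h(a) == b for a, b in seen):
                yield i
                break

# Illustrative hypothesis space: the constant functions 0..4, then the
# identity function (the target).  The c=c default pins each constant.
hypotheses = [lambda x, c=c: c for c in range(5)] + [lambda x: x]
target = lambda x: x
stream = ((x, target(x)) for x in range(10))

conjectures = list(learn_in_the_limit(hypotheses, stream))
mind_changes = sum(1 for a, b in zip(conjectures, conjectures[1:]) if a != b)
print(conjectures)    # stabilizes on index 5, the identity function
print(mind_changes)   # one mind change: from "constant 0" to the identity
```

After the first datum (0, 0) the learner conjectures the constant-0 function; the second datum (1, 1) refutes every constant, so it changes its mind once and converges on the identity. The resource-restricted variants the paper studies constrain exactly such learners: how much of `seen` may be remembered, how good the final program must be, and how feasibly each conjecture must be computed.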
Work supported in part by NSF Grant Number CCR-0208616 at UD.
Case, J. Resource Restricted Computability Theoretic Learning: Illustrative Topics and Problems. Theory Comput Syst 45, 773–786 (2009). https://doi.org/10.1007/s00224-009-9169-7