
Resource Restricted Computability Theoretic Learning: Illustrative Topics and Problems

Published in: Theory of Computing Systems

Abstract

Computability-theoretic learning theory (machine inductive inference) typically concerns learning programs for languages or functions from a stream of complete data about them and, importantly, allows mind changes as to the conjectured programs. The theory takes algorithmicity into account but typically does not take into account the feasibility of computational resources. This paper provides example results and problems for three ways the theory can be constrained by computational feasibility: the learner has memory limitations; the learned programs are desired to be optimal; and there are feasibility constraints on computing each output program, as well as further constraints to minimize postponement tricks.
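The learning model sketched above can be illustrated with the classic "identification by enumeration" strategy: the learner sees ever-larger finite portions of the graph of a target function and conjectures the first hypothesis in a fixed enumeration consistent with the data so far, changing its mind when new data refutes the current conjecture. The tiny step-function class and all names below are illustrative assumptions, not taken from the paper.

```python
# Hypothesis class (illustrative): h_i(x) = 1 if x < i else 0, for i = 0..7.
HYPOTHESES = [(lambda x, i=i: 1 if x < i else 0) for i in range(8)]

def conjecture(data):
    """Return the index of the first hypothesis consistent with all
    (x, y) pairs seen so far -- the enumeration strategy."""
    for i, h in enumerate(HYPOTHESES):
        if all(h(x) == y for x, y in data):
            return i
    return None  # cannot happen when the target is in the class

# Feed the learner a complete data stream for the target h_5 and record
# its conjectures; each change of conjecture is a "mind change".
target = 5
stream = [(x, HYPOTHESES[target](x)) for x in range(8)]
history = [conjecture(stream[:n + 1]) for n in range(len(stream))]
print(history)  # prints [1, 2, 3, 4, 5, 5, 5, 5]
```

Here the learner makes several mind changes before stabilizing on the correct index, which is exactly convergence "in the limit". Note that this sketch ignores resource bounds entirely; the paper's three constraints restrict, respectively, how much of `data` the learner may remember, how good the converged-upon program must be, and how much time each conjecture may cost.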




Author information


Corresponding author

Correspondence to John Case.

Additional information

Work supported in part by NSF Grant Number CCR-0208616 at UD.


About this article

Cite this article

Case, J. Resource Restricted Computability Theoretic Learning: Illustrative Topics and Problems. Theory Comput Syst 45, 773–786 (2009). https://doi.org/10.1007/s00224-009-9169-7


