A Solution to Wiehagen’s Thesis

Theory of Computing Systems

Abstract

Wiehagen’s Thesis in Inductive Inference (1991) essentially states that, for each learning criterion, learning can be done in a normalized, enumerative way. The thesis was not a formal statement and thus did not allow for a formal proof, but support was given by examples of a number of different learning criteria that can be learned by enumeration. Building on recent formalizations of learning criteria, we are now able to formalize Wiehagen’s Thesis. We prove the thesis for a wide range of learning criteria, including many popular criteria from the literature. We also show the limitations of the thesis by giving four learning criteria for which the thesis does not hold (and, in two cases, was probably not meant to hold). Beyond the original formulation of the thesis, we also prove stronger versions which allow for many corollaries relating to strongly decisive and conservative learning.
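To fix intuition, the enumerative strategy ("identification by enumeration") underlying the thesis can be paraphrased as follows; this is an informal sketch in the spirit of [25], not the formalization developed in the paper. Given a numbering ψ and data \(f[n] = (f(0),\ldots ,f(n-1))\) of a target function f, the enumerative learner conjectures the least ψ-index consistent with the data seen so far,

\[ h_{\psi }(f[n]) = \min \{\, i \mid \forall x < n:\ \psi _i(x) = f(x) \,\}, \]

and Wiehagen’s Thesis asserts that, for each learning criterion and each class learnable under it, there is a suitably chosen numbering ψ (in particular one for which the minimum above can be computed) such that \(h_{\psi }\) learns the class under that criterion.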

Notes

  1. “Ex” stands for explanatory.

  2. We let \(\mathbb {N} = \{0,1,2,\ldots \}\) be the set of natural numbers, and we fix a coding of programs based on Turing machines, letting, for any program (code) \(p \in \mathbb {N}\), \(\varphi _p\) be the function computed by the Turing machine coded by p.

  3. For a linear-time example, see [21, Section 2.3].

  4. h() denotes the initial conjecture (based on no data) made by h.

  5. The function pad was defined in Section 2.

  6. For convenience we write, for all a, z, \(\varphi (a,z)\) instead of \(\varphi _a(z)\).

  7. The functions pad and unpad1 were defined in Section 2; their standard properties are sketched below.
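Although Section 2 is not reproduced here, the functions pad and unpad1 referenced in Notes 5 and 7 are presumably the standard padding machinery of computability theory; under that assumption, their defining properties can be summarized as

\[ \forall p, n \in \mathbb {N}:\quad \varphi _{\mathrm {pad}(p,n)} = \varphi _{p} \quad \text {and} \quad \mathrm {unpad}_1(\mathrm {pad}(p,n)) = p, \]

i.e., pad is a computable (typically injective) function producing, from a program p and an auxiliary value n, an equivalent program, and unpad1 computably recovers one of the two components, here taken to be the first (the exact convention is fixed in Section 2).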

References

  1. Akama, Y., Zeugmann, T.: Consistent and coherent learning with δ-delay. Inf. Comput. 206, 1362–1374 (2008)

  2. Angluin, D.: Inductive inference of formal languages from positive data. Inf. Control 45, 117–135 (1980)

  3. Baliga, G., Case, J., Merkle, W., Stephan, F., Wiehagen, R.: When unlearning helps. Inf. Comput. 206, 694–709 (2008)

  4. Bārzdiņš, J.: Inductive inference of automata, functions and programs. In: Proceedings of the International Congress of Mathematicians, pp. 455–560 (1974). English translation in American Mathematical Society Translations: Series 2, vol. 109, pp. 107–112 (1977)

  5. Beick, H.R.: Induktive Inferenz mit Höchster Inferenzgeschwindigkeit. Dissertation, Humboldt University of Berlin (1984)

  6. Blum, L., Blum, M.: Toward a mathematical theory of inductive inference. Inf. Control 28, 125–155 (1975)

  7. Case, J.: Periodicity in generations of automata. Math. Syst. Theory 8, 15–32 (1974)

  8. Case, J.: Infinitary self-reference in learning theory. J. Exp. Theor. Artif. Intell. 6, 3–16 (1994)

  9. Case, J., Kötzing, T.: Strongly non-U-shaped learning results by general techniques. In: Tauman Kalai, A., Mohri, M. (eds.) COLT 2010 - The 23rd Conference on Learning Theory, Haifa, Israel, June 27–29, 2010, pp. 181–193. Omnipress (2010)

  10. Case, J., Kötzing, T.: Learning secrets interactively. Dynamic modeling in inductive inference. Inf. Comput. 220, 60–73 (2012)

  11. Freivalds, R., Karpinski, M., Smith, C.H.: Co-learning of total recursive functions. In: Warmuth, M.K. (ed.) Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory, COLT 1994, New Brunswick, NJ, USA, July 12–15, 1994, pp. 190–197. ACM (1994)

  12. Freivalds, R., Kinber, E., Wiehagen, R.: Inductive inference and computable one-one numberings. Z. Math. Logik Grundl. Math. 28, 463–479 (1982)

  13. Gold, E.: Language identification in the limit. Inf. Control 10, 447–474 (1967)

  14. Jain, S., Kinber, E., Lange, S., Wiehagen, R., Zeugmann, T.: Learning languages and functions by erasing. Theor. Comput. Sci. 241, 143–189 (2000)

  15. Jain, S., Osherson, D., Royer, J., Sharma, A.: Systems that Learn: An Introduction to Learning Theory, 2nd edn. MIT Press, Cambridge (1999)

  16. Jantke, K.P.: Monotonic and non-monotonic inductive inference of functions and patterns. In: Dix, J., Jantke, K.P., Schmitt, P.H. (eds.) Nonmonotonic and Inductive Logic, 1st International Workshop, Karlsruhe, Germany, December 4–7, 1990, Proceedings, volume 543 of Lecture Notes in Computer Science, pp. 161–177. Springer (1991)

  17. Jantke, K.P., Beick, H.: Combining postulates of naturalness in inductive inference. Elektron. Inf. Verarb. Kybern. 17, 465–484 (1982)

  18. Kötzing, T.: Abstraction and Complexity in Computational Learning in the Limit. PhD thesis, University of Delaware. Available online at http://pqdtopen.proquest.com/#viewpdf?dispub=3373055 (2009)

  19. Kötzing, T.: A solution to Wiehagen’s thesis. In: Mayr, E.W., Portier, N. (eds.) 31st International Symposium on Theoretical Aspects of Computer Science (STACS 2014), STACS 2014, March 5–8, 2014, Lyon, France, volume 25 of LIPIcs, pp. 494–505. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik (2014)

  20. Rogers, H.: Theory of Recursive Functions and Effective Computability. McGraw Hill, New York (1967). Reprinted by MIT Press, Cambridge, 1987

  21. Royer, J., Case, J.: Subrecursive Programming Systems: Complexity and Succinctness. Research monograph in Progress in Theoretical Computer Science. Birkhäuser, Boston (1994)

  22. Schäfer-Richter, G.: Über Eingabeabhängigkeit und Komplexität von Inferenzstrategien. Dissertation, RWTH Aachen (1984)

  23. Wiehagen, R.: Limes-Erkennung rekursiver Funktionen durch spezielle Strategien. Elektron. Inf. Verarb. Kybern. 12, 93–99 (1976)

  24. Wiehagen, R.: Zur Theorie der Algorithmischen Erkennung. Dissertation B, Humboldt University of Berlin (1978)

  25. Wiehagen, R.: A thesis in inductive inference. In: Dix, J., Jantke, K.P., Schmitt, P.H. (eds.) Nonmonotonic and Inductive Logic, 1st International Workshop, Karlsruhe, Germany, December 4–7, 1990, Proceedings, volume 543 of Lecture Notes in Computer Science, pp. 184–207. Springer (1991)

  26. Zeugmann, T., Zilles, S.: Learning recursive functions: a survey. Theor. Comput. Sci. 397, 4–56 (2008)

Acknowledgments

I would like to thank Sandra Zilles for bringing Wiehagen’s Thesis into connection with the approach of abstractly defining learning criteria, as well as the anonymous reviewers of the conference version and of the journal version for their helpful suggestions.

Author information

Corresponding author

Correspondence to Timo Kötzing.

About this article

Cite this article

Kötzing, T. A Solution to Wiehagen’s Thesis. Theory Comput Syst 60, 498–520 (2017). https://doi.org/10.1007/s00224-016-9678-0
