
Learning small programs with additional information

  • Conference paper
  • Conference: Logical Foundations of Computer Science (LFCS 1997)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1234)


Abstract

This paper was inspired by [FBW 94]. An arbitrary upper bound on the size of some program for the target function suffices for the learning of some program for this function. In [FBW 94] it was discovered that if “learning” is understood as “identification in the limit,” then in some programming languages it is possible to learn a program of size not exceeding the bound, while in some other programming languages this is not possible.
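The claim that an upper bound on the size of some correct program suffices for identification in the limit can be illustrated by the classic learning-by-enumeration argument. The sketch below is a toy model, not the paper's construction: "programs" are a hand-picked list of total Python functions, "size" is simply the index, and the learner conjectures the least candidate within the bound that is consistent with the data seen so far. Restricting conjectures to indices within the bound is exactly the step that, as the results below show, cannot always be carried out in every programming language.

```python
# Toy model of identification in the limit with a size bound.
# Programs are indexed total functions; the learner receives a bound b
# on the "size" (here: index) of some correct program, plus a growing
# sample of (x, f(x)) pairs, and conjectures the least index <= b
# consistent with all data observed so far. Illustrative only.

PROGRAMS = [
    lambda x: 0,          # program 0
    lambda x: x,          # program 1
    lambda x: x % 2,      # program 2
    lambda x: x * x,      # program 3
    lambda x: x + 1,      # program 4
]

def learn_in_the_limit(target, bound, horizon=20):
    """Yield the learner's conjecture after each new data point."""
    data = []
    for x in range(horizon):
        data.append((x, target(x)))
        # Only finitely many candidates lie below the bound, so the
        # sequence of conjectures must stabilize on a correct one.
        conjecture = next(
            i for i in range(bound + 1)
            if all(PROGRAMS[i](a) == b for a, b in data)
        )
        yield conjecture

# The target is computed by program 3; any bound >= 3 suffices.
conjectures = list(learn_in_the_limit(lambda x: x * x, bound=4))
print(conjectures[-1])  # converges to 3
```

Because all conjectures are drawn from indices not exceeding the bound, this toy learner trivially outputs a "small" program; in general, whether that restriction can be maintained depends on the programming language, which is the phenomenon studied in the paper.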

We have studied three other learning types, namely, finite identification, co-learning, and confidence-learning. These three types differ considerably. Co-learning with additional information in the form of an arbitrary upper bound on the size of the minimal program allows learning of the class of all recursive functions; finite identification does not. Confidence-learning is strong enough to learn the class of all recursive functions even without the additional information. However, the results of our paper show exactly the opposite ranking of the capabilities of learning programs of size not exceeding the given bound.

For finite identification it is still possible to identify small programs with additional information in some programming languages, but not in all of them. For co-learning it is not possible in any programming language. These results contrast with the result in [FKS 94] showing that an arbitrary class of recursive functions is co-learnable if and only if it is identifiable in the limit. Finally, for confidence-learning it is in general not possible to identify small programs with additional information.

This work was facilitated by an international agreement under NSF Grants 9119540 and 9421640.

Supported by Latvian Science Council Grant No. 96.0282.

Supported in part by NSF Grants 9020079 and 9301339.


References

  1. Dana Angluin and Carl H. Smith. Inductive inference: theory and methods. Computing Surveys, v. 15, 1983, pp. 237–269.

  2. Jānis Bārzdiņš, Rūsiņš Freivalds, and Carl H. Smith. Learning with confidence. Lecture Notes in Computer Science, v. 1046, 1996, pp. 207–218.

  3. Rūsiņš Freivalds. Minimal Gödel numbers and their identification in the limit. Lecture Notes in Computer Science, v. 32, 1975, pp. 219–225.

  4. Rūsiņš Freivalds. Effective operations and functionals computable in the limit. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, v. 24, 1978, pp. 193–206 (in Russian).

  5. Rūsiņš Freivalds, Ognian Botuscharov, and Rolf Wiehagen. Identifying nearly minimal Gödel numbers from additional information. Lecture Notes in Computer Science, v. 872, 1994, pp. 91–99.

  6. Rūsiņš Freivalds, Marek Karpinski, and Carl H. Smith. Co-learning of total recursive functions. In Proceedings of the Seventh Annual Conference on Computational Learning Theory, New Brunswick, New Jersey, pp. 190–197. ACM Press, July 1994.

  7. Rūsiņš Freivalds and Carl H. Smith. The role of procrastination in machine learning. Information and Computation, v. 107, 1993, pp. 237–271.

  8. Rūsiņš Freivalds and Rolf Wiehagen. Inductive inference with additional information. Journal of Information Processing and Cybernetics, v. 15, 1979, pp. 179–185.

  9. E. M. Gold. Language identification in the limit. Information and Control, v. 10, 1967, pp. 447–474.

  10. Daniel N. Osherson, Michael Stob, and Scott Weinstein. Systems that Learn. MIT Press, 1986.

  11. Hartley Rogers Jr. Theory of Recursive Functions and Effective Computability. McGraw-Hill, 1967.

  12. Rolf Wiehagen and Thomas Zeugmann. Learning and consistency. Lecture Notes in Artificial Intelligence, v. 961, 1995, pp. 1–24.


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Freivalds, R., Tervits, G., Wiehagen, R., Smith, C. (1997). Learning small programs with additional information. In: Adian, S., Nerode, A. (eds) Logical Foundations of Computer Science. LFCS 1997. Lecture Notes in Computer Science, vol 1234. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63045-7_11


  • Print ISBN: 978-3-540-63045-6

  • Online ISBN: 978-3-540-69065-8
