Learning with growing quality

Conference paper
Algorithmic Learning Theory (ALT 1993)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 744)


Abstract

Usually the “quality” of learning grows with experience. This paper formalizes that phenomenon within a recursion-theoretic framework. We consider the learning of total recursive functions by an algorithmic device (an inductive inference machine) and describe the “quality” of learning in two different ways: as the probability with which the machine identifies the given function correctly, and as the density of the set of arguments on which the hypothesis produced by the machine coincides with the function to be identified. We prove that in both cases there exist classes of sets of total recursive functions such that, for each of these sets, the “quality” with which a learning device can identify an arbitrary function from the set grows with the number of other functions the device is trying to identify at the same time; i.e., these classes are identifiable only by learning devices whose learning capabilities improve with practice.
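The second quality measure mentioned in the abstract can be illustrated concretely. In the literature, the density of a set A of natural numbers is an asymptotic notion, typically the limit (or limit inferior) of |A ∩ {0, …, n−1}| / n. The following minimal sketch computes a finite-prefix approximation of the density of agreement between a hypothesis and a target function; the function names and the finite cutoff are illustrative assumptions, not constructions from the paper.

```python
def agreement_density(f, h, n):
    """Fraction of arguments in {0, ..., n-1} on which hypothesis h
    agrees with the target function f (a finite approximation of the
    asymptotic density of the agreement set {x : h(x) = f(x)})."""
    agree = sum(1 for x in range(n) if f(x) == h(x))
    return agree / n

# Example: a hypothesis that is correct except on multiples of 10,
# so its agreement set has density 9/10.
f = lambda x: x * x
h = lambda x: x * x if x % 10 != 0 else -1
print(agreement_density(f, h, 1000))  # 0.9 on this prefix
```

A hypothesis identifying f in the exact sense corresponds to density 1; the paper's results concern devices whose achievable density (or identification probability) increases with the number of functions learned in parallel.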



Editor information

Klaus P. Jantke Shigenobu Kobayashi Etsuji Tomita Takashi Yokomori


Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Viksna, J. (1993). Learning with growing quality. In: Jantke, K.P., Kobayashi, S., Tomita, E., Yokomori, T. (eds) Algorithmic Learning Theory. ALT 1993. Lecture Notes in Computer Science, vol 744. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57370-4_45

  • DOI: https://doi.org/10.1007/3-540-57370-4_45

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57370-8

  • Online ISBN: 978-3-540-48096-9

  • eBook Packages: Springer Book Archive
