Stochastic Finite Learning

Motivation and Background

Assume that we are given a concept class \(\mathcal{C}\) and are asked to design a learner for it. Next, suppose we already know, or can prove, that \(\mathcal{C}\) is not learnable in the model of PAC Learning. But it can be shown that \(\mathcal{C}\) is learnable within Gold’s (1967) model of Inductive Inference, or learning in the limit. Thus, we can design a learner behaving as follows. When fed any of the data sequences allowed in this model, it converges in the limit to a hypothesis correctly describing the target concept. Nothing more is known. Let \(M\) be any fixed learner. If \((d_n)_{n \geq 0}\) is any data sequence, then the stage of convergence is the least integer \(m\) such that \(M(d_m) = M(d_n)\) for all \(n \geq m\), provided such an \(m\) exists (and infinite otherwise). In general, it is undecidable whether or not the learner has already reached the stage of convergence, and even if it is decidable for a particular concept class, it may be practically infeasible to do so....
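
A minimal sketch (not part of this entry) may help to make the stage of convergence concrete. It uses a toy concept class, the initial segments \(\{0,\ldots,k\}\) of the natural numbers learned from positive examples; this class, the choice of Python, and all identifiers below are assumptions made purely for illustration. Note that the stage is computed here in hindsight on a finite prefix, whereas a learner processing the data online cannot, in general, decide that the stage has already been reached.

    # Illustrative sketch only: a Gold-style learner for the toy class of
    # initial segments {0, ..., k} of the naturals, fed positive examples.
    def learner(prefix):
        # Hypothesis: the least initial segment consistent with the data seen
        # so far, i.e., guess k to be the largest example observed.
        return max(prefix) if prefix else 0

    def stage_of_convergence(data):
        # For a finite data sequence, return the least m with
        # learner(data[:m+1]) == learner(data[:n+1]) for all n >= m.
        hypotheses = [learner(data[:i + 1]) for i in range(len(data))]
        final = hypotheses[-1]
        m = len(data) - 1
        while m > 0 and hypotheses[m - 1] == final:
            m -= 1
        return m

    # Positive examples, in arbitrary order and with repetitions,
    # drawn from the target concept {0, ..., 7}.
    data = [2, 5, 5, 7, 1, 7, 3]
    print(stage_of_convergence(data))  # prints 3: from index 3 on, the hypothesis stays 7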

Recommended Reading

  • Angluin, D. (1980a). Finding patterns common to a set of strings. Journal of Computer and System Sciences, 21(1), 46–62.

  • Angluin, D. (1980b). Inductive inference of formal languages from positive data. Information and Control, 45(2), 117–135.

  • Blumer, A., Ehrenfeucht, A., Haussler, D., & Warmuth, M. K. (1989). Learnability and the Vapnik–Chervonenkis dimension. Journal of the ACM, 36(4), 929–965.

  • Erlebach, T., Rossmanith, P., Stadtherr, H., Steger, A., & Zeugmann, T. (2001). Learning one-variable pattern languages very efficiently on average, in parallel, and by asking queries. Theoretical Computer Science, 261(1), 119–156.

  • Gold, E. M. (1967). Language identification in the limit. Information and Control, 10(5), 447–474.

  • Haussler, D. (1987). Bias, version spaces and Valiant’s learning framework. In P. Langley (Ed.), Proceedings of the fourth international workshop on machine learning (pp. 324–336). San Mateo, CA: Morgan Kaufmann.

  • Haussler, D., Kearns, M., Littlestone, N., & Warmuth, M. K. (1991). Equivalence of models for polynomial learnability. Information and Computation, 95(2), 129–161.

  • Lange, S., & Wiehagen, R. (1991). Polynomial-time inference of arbitrary pattern languages. New Generation Computing, 8(4), 361–370.

  • Lange, S., & Zeugmann, T. (1996). Set-driven and rearrangement-independent learning of recursive languages. Mathematical Systems Theory, 29(6), 599–634.

  • Mitchell, A., Scheffer, T., Sharma, A., & Stephan, F. (1999). The VC-dimension of subclasses of pattern languages. In O. Watanabe & T. Yokomori (Eds.), Algorithmic learning theory, tenth international conference, ALT ’99, Tokyo, Japan, December 1999, Proceedings, Lecture notes in artificial intelligence (Vol. 1720, pp. 93–105). Springer.

  • Reischuk, R., & Zeugmann, T. (2000). An average-case optimal one-variable pattern language learner. Journal of Computer and System Sciences, 60(2), 302–335.

  • Rossmanith, P., & Zeugmann, T. (2001). Stochastic finite learning of the pattern languages. Machine Learning, 44(1/2), 67–91.

  • Goldman, S. A., Kearns, M. J., & Schapire, R. E. (1993). Exact identification of circuits using fixed points of amplification functions. SIAM Journal on Computing, 22(4), 705–726.

  • Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM, 27(11), 1134–1142.

  • Zeugmann, T. (1998). Lange and Wiehagen’s pattern language learning algorithm: An average-case analysis with respect to its total learning time. Annals of Mathematics and Artificial Intelligence, 23, 117–145.

  • Zeugmann, T. (2006). From learning in the limit to stochastic finite learning. Theoretical Computer Science, 364(1), 77–97. Special issue for ALT 2003.

Copyright information

© 2011 Springer Science+Business Media, LLC

Cite this entry

Zeugmann, T. (2011). Stochastic Finite Learning. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30164-8_787
