
Monotonic and dual-monotonic probabilistic language learning of indexed families with high probability

  • Conference paper
Computational Learning Theory (EuroCOLT 1997)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 1208))


Abstract

The present paper deals with monotonic and dual-monotonic probabilistic identification of indexed families of uniformly recursive languages from positive data. In particular, we consider the special case where the probability is equal to 1.

Earlier results in the field of probabilistic identification established that, in the case of function identification, each collection of recursive functions identifiable with probability p > 1/2 is deterministically identifiable (cf. [23]). In the case of language learning from text, each collection of recursive languages identifiable from text with probability p > 2/3 is deterministically identifiable (cf. [20]). In particular, there is no gain in learning power when the collections of functions or languages are required to be inferred with probability p = 1.
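The thresholds above rest on amplification-style arguments: once a single run of a probabilistic strategy succeeds with probability bounded above 1/2, independent repetitions push the overall success probability toward 1. The following toy simulation (purely illustrative; it is not the construction used in [23] or [20], and all names and parameters are made up here) sketches that intuition for a per-run success probability of 0.6:

```python
import random

def majority_boost(p, runs, trials=10_000):
    """Empirical success rate of taking a majority vote over `runs`
    independent executions of a strategy that is correct with
    probability p on each execution."""
    rng = random.Random(0)  # fixed seed for reproducibility
    wins = 0
    for _ in range(trials):
        correct = sum(rng.random() < p for _ in range(runs))
        if 2 * correct > runs:  # strict majority of runs correct
            wins += 1
    return wins / trials

single = majority_boost(0.6, 1)    # one run: succeeds about 60% of the time
voted = majority_boost(0.6, 101)   # majority over 101 runs: close to 1
```

With p = 0.6 the voted success rate climbs well above the single-run rate, which is the intuition behind the collapse of probabilistic to deterministic identification above the respective thresholds.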

As shown in [18], highly structured probabilistic hierarchies arise when probabilistic learning is subject to monotonicity constraints. In this paper, we consider monotonic and dual-monotonic probabilistic learning of indexed families with respect to proper, class-preserving and class-comprising hypothesis spaces. In particular, we prove for proper monotonic as well as for proper dual-monotonic learning that probabilistic learning is more powerful than deterministic learning even if the probability is required to be 1. To establish this result, we need a sophisticated version of the proof technique developed in [17].
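As a concrete, entirely illustrative example of the deterministic baseline (it does not appear in the paper): for the indexed family L_k = {0, ..., k} over the natural numbers, the learner that conjectures the largest element seen so far is strong-monotonic, since each conjectured language contains the previous one, and it converges on every text for L_k:

```python
def monotonic_learner(text):
    """Learn L_k = {0, ..., k} from a text (a sequence of positive
    examples). The hypothesis after each example is the largest
    element seen so far; since L_h is a subset of L_h' whenever
    h <= h', the sequence of conjectured languages only grows."""
    h, hypotheses = None, []
    for x in text:
        h = x if h is None else max(h, x)
        hypotheses.append(h)
    return hypotheses

hyps = monotonic_learner([2, 0, 5, 3, 5, 1, 4])
# hyps == [2, 2, 5, 5, 5, 5, 5]: never shrinks, converges to index 5
```

The paper's results concern families where no such deterministic monotonic learner exists even though a probability-1 probabilistic one does.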


References

  1. A. Ambainis, Probabilistic and Team PFIN-type Learning: General Properties, in: Proc. 9th ACM Conf. on Comp. Learning Theory (ACM Press, Desenzano del Garda, 1996) 157–168.

  2. D. Angluin, Inductive inference of formal languages from positive data, Information and Control 45 (1980) 117–135.

  3. M. Blum, A machine-independent theory of the complexity of recursive functions, Journal of the ACM 14 (1967) 322–336.

  4. R. Daley, B. Kalyanasundaram, Use of reduction arguments in determining Popperian FIN-type learning capabilities, in: Proc. of the 3rd Int. Workshop on Algorithmic Learning Theory, Lecture Notes in Computer Science 744 (Springer, Berlin, 1993) 173–186.

  5. R. Daley, B. Kalyanasundaram, M. Velauthapillai, The power of probabilism in Popperian FINite learning, in: Proc. of AII, Lecture Notes in Computer Science 642 (Springer, Berlin, 1992) 151–169.

  6. R. Freivalds, Finite identification of general recursive functions by probabilistic strategies, in: Proc. of the Conf. on Fundamentals of Computation Theory (Akademie-Verlag, Berlin, 1979) 138–145.

  7. E.M. Gold, Language identification in the limit, Information and Control 10 (1967) 447–474.

  8. J. Hopcroft, J. Ullman, Introduction to Automata Theory, Languages, and Computation (Addison-Wesley Publ. Company, 1979).

  9. S. Jain, A. Sharma, Probability is more powerful than team for language identification, in: Proc. 6th ACM Conf. on Comp. Learning Theory (ACM Press, Santa Cruz, July 1993) 192–198.

  10. S. Jain, A. Sharma, On monotonic strategies for learning r.e. languages, Annals of Mathematics and Artificial Intelligence (1994, to appear).

  11. K.P. Jantke, Monotonic and non-monotonic inductive inference, New Generation Computing 8, 349–360.

  12. S. Kapur, Monotonic Language Learning, in: S. Doshita, K. Furukawa, K.P. Jantke, eds., Proc. of ALT'92, Lecture Notes in AI 743 (Springer, Berlin, 1992) 147–158.

  13. S. Lange, T. Zeugmann, Types of monotonic language learning and their characterization, in: Proc. 5th ACM Conf. on Comp. Learning Theory (ACM Press, Pittsburgh, 1992) 377–390.

  14. S. Lange, T. Zeugmann, Monotonic versus non-monotonic language learning, in: G. Brewka, K.P. Jantke, P.H. Schmitt, eds., Proc. 2nd Int. Workshop on Nonmonotonic and Inductive Logics, Lecture Notes in AI 659 (Springer, Berlin, 1993) 254–269.

  15. S. Lange, T. Zeugmann, Language learning in dependence on the space of hypotheses, in: Proc. of the 6th ACM Conf. on Comp. Learning Theory (ACM Press, Santa Cruz, July 1993) 127–136.

  16. S. Lange, T. Zeugmann, S. Kapur, Monotonic and Dual Monotonic Language Learning, Theoretical Computer Science 155 (1996) 365–410.

  17. L. Meyer, Probabilistic language learning under monotonicity constraints, in: K.P. Jantke, T. Shinohara, T. Zeugmann, eds., Proc. of ALT'95, Lecture Notes in AI 997 (Springer, Berlin, 1995) 169–185.

  18. L. Meyer, Probabilistic learning of indexed families under monotonicity constraints, Theoretical Computer Science, Special Issue ALT'95, to appear.

  19. D. Osherson, M. Stob, S. Weinstein, Systems that Learn, An Introduction to Learning Theory for Cognitive and Computer Scientists (MIT Press, Cambridge MA, 1986).

  20. L. Pitt, Probabilistic inductive inference, Journal of the ACM 36, 2 (1989) 383–433.

  21. L. Valiant, A theory of the learnable, Communications of the ACM 27, 11 (1984) 1134–1142.

  22. R. Wiehagen, A Thesis in Inductive Inference, in: J. Dix, K.P. Jantke, P.H. Schmitt, eds., Proc. First International Workshop on Nonmonotonic and Inductive Logic, Lecture Notes in Artificial Intelligence 534 (Springer, Berlin, 1990) 184–207.

  23. R. Wiehagen, R. Freivalds, E.B. Kinber, On the Power of Probabilistic Strategies in Inductive Inference, Theoretical Computer Science 28 (1984) 111–133.

  24. R. Wiehagen, R. Freivalds, E.B. Kinber, Probabilistic versus Deterministic Inductive Inference in Nonstandard Numberings, Zeitschr. f. math. Logik und Grundlagen d. Math. 34 (1988) 531–539.

  25. T. Zeugmann, S. Lange, A Guided Tour Across the Boundaries of Learning Recursive Languages, in: K.P. Jantke and S. Lange, eds., Algorithmic Learning for Knowledge-Based Systems, Lecture Notes in Artificial Intelligence 961 (Springer, Berlin, 1995) 193–262.



Editor information

Shai Ben-David


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Meyer, L. (1997). Monotonic and dual-monotonic probabilistic language learning of indexed families with high probability. In: Ben-David, S. (eds) Computational Learning Theory. EuroCOLT 1997. Lecture Notes in Computer Science, vol 1208. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-62685-9_7


  • DOI: https://doi.org/10.1007/3-540-62685-9_7


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-62685-5

  • Online ISBN: 978-3-540-68431-2

