Comparing the Power of Probabilistic Learning and Oracle Identification Under Monotonicity Constraints

  • Conference paper
Algorithmic Learning Theory (ALT 1998)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1501)


Abstract

In the setting of learning indexed families, probabilistic learning under monotonicity constraints is more powerful than deterministic learning under monotonicity constraints even if the probability is close to 1, provided the learning machines are restricted to proper or class-preserving hypothesis spaces (cf. [19]). In this paper, we investigate the relation between probabilistic learning and oracle identification under monotonicity constraints. In particular, we address the question of how much “additional information” provided by oracles is necessary to compensate for the additional power of probabilistic learning.
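
As background, the standard monotonicity constraints on the hypothesis sequence h_0, h_1, ... produced on a text for the target language L can be sketched as follows (cf. [13], [16]); the formulation below is a paraphrase of the usual definitions, not the paper's own notation:

\[
\begin{aligned}
&\text{strong-monotonic:} && i \le j \;\Rightarrow\; L(h_i) \subseteq L(h_j),\\
&\text{monotonic:}        && i \le j \;\Rightarrow\; L(h_i) \cap L \subseteq L(h_j) \cap L,\\
&\text{conservative:}     && h_i \ne h_{i+1} \;\Rightarrow\; h_i \text{ is inconsistent with the data seen so far.}
\end{aligned}
\]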

If the oracle machines have access to the oracle K, then they can fully compensate for the power of monotonic (conservative) probabilistic machines, provided the probability p is greater than 2/3 (1/2). Furthermore, we show that for every recursively enumerable oracle A, there exists a learning problem which is strong-monotonically learnable by an oracle machine with access to A, but neither conservatively nor monotonically learnable with any probability p > 0. A similar result holds for Peano-complete oracles. However, probabilistic learning under monotonicity constraints is “rich” enough to encode every recursively enumerable set in a characteristic learning problem, i.e., for every recursively enumerable set A and every p < 2/3, there exists a learning problem L_A which is monotonically learnable with probability p, and monotonically learnable with oracle B if and only if A is Turing-reducible to B. The same result holds for conservative probabilistic learning with p < 1/2, and for strong-monotonic learning with probability p = 2/3. In particular, it follows that probabilistic learning under monotonicity constraints cannot be characterized in terms of oracle identification. Moreover, we settle an open question from [19] by showing that the probabilistic hierarchies of class-preserving monotonic and conservative probabilistic learning are dense.
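
In symbols, and with illustrative operator names that are not the paper's own (MonProb_p for monotonic probabilistic learning with probability p, Mon[B] for monotonic learning with oracle B), the encoding result for the monotonic case reads: for every recursively enumerable set A and every p < 2/3 there exists a learning problem L_A with

\[
\mathcal{L}_A \in \mathrm{MonProb}_p
\qquad\text{and}\qquad
\bigl(\mathcal{L}_A \in \mathrm{Mon}[B] \iff A \le_T B\bigr)
\quad\text{for all oracles } B.
\]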

Finally, we show that these probability bounds are strict: in the cases of monotonic probabilistic learning with probability p = 2/3, conservative probabilistic learning with probability p = 1/2, and strong-monotonic probabilistic learning with probability p = 1/2, the oracle K is not sufficient to compensate for the power of probabilistic learning under monotonicity constraints.
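
In summary, restating only the bounds given above:

  constraint          K compensates for      K not sufficient at
  monotonic           p > 2/3                p = 2/3
  conservative        p > 1/2                p = 1/2
  strong-monotonic    (no threshold stated)  p = 1/2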

References

  1. D. Angluin, Inductive inference of formal languages from positive data, Information and Control 45 (1980) 117–135.

  2. D. Angluin, C. Smith, Inductive inference: theory and methods, Computing Surveys 15,3 (1983) 237–269.

  3. M. Blum, Machine independent theory of complexity of recursive functions, Journal of the ACM 14 (1967) 322–336.

  4. J. Case, C. Smith, Comparison of Identification Criteria for Machine Inductive Inference, Theoretical Computer Science 25,2 (1983) 193–220.

  5. R. Freivalds, Finite identification of general recursive functions by probabilistic strategies, in: Proc. of the Conf. on Fundamentals of Computation Theory (Akademie-Verlag, Berlin, 1979) 138–145.

  6. L. Fortnow, W. Gasarch, S. Jain, E. B. Kinber, M. Kummer, S. Kurtz, M. Pleszkoch, T. Slaman, R. Solovay, F. Stephan, Extremes in the degrees of inferability, Ann. Pure Appl. Logic 66 (1994) 231–276.

  7. W. Gasarch, M. Pleszkoch, Learning via queries to an oracle, in: Proc. 2nd ACM Conf. on Comp. Learning Theory (ACM Press, Santa Cruz, July 1989) 175–188.

  8. W. Gasarch, C. Smith, Learning via queries, Journal of the ACM 39,3 (1992) 649–675.

  9. E.M. Gold, Language identification in the limit, Information and Control 10 (1967) 447–474.

  10. J. Hopcroft, J. Ullman, Introduction to Automata Theory, Languages, and Computation (Addison-Wesley Publ. Company, 1979).

  11. S. Jain, A. Sharma, Probability is more powerful than team for language identification, in: Proc. 6th ACM Conf. on Comp. Learning Theory (ACM Press, Santa Cruz, July 1993) 192–198.

  12. S. Jain, A. Sharma, On monotonic strategies for learning r.e. languages, Annals of Mathematics and Artificial Intelligence (1994).

  13. K.P. Jantke, Monotonic and non-monotonic inductive inference, New Generation Computing 8 (1991) 349–360.

  14. K.P. Jantke, Monotonic and non-monotonic inductive inference of functions and patterns, in: J. Dix, K.P. Jantke, P.H. Schmitt, eds., Proc. 1st Int. Workshop on Nonmonotonic and Inductive Logics, Lecture Notes in AI 543 (Springer, Berlin, 1991) 161–177.

  15. M. Kummer, F. Stephan, On the structure of degrees of inferability, Journal of Computer and System Sciences 52 (1996) 214–238.

  16. S. Lange, T. Zeugmann, Monotonic versus non-monotonic language learning, in: G. Brewka, K.P. Jantke, P.H. Schmitt, eds., Proc. 2nd Int. Workshop on Nonmonotonic and Inductive Logics, Lecture Notes in AI 659 (Springer, Berlin, 1993) 254–269.

  17. S. Lange, T. Zeugmann, Language learning in dependence on the space of hypotheses, in: Proc. of the 6th ACM Conf. on Comp. Learning Theory (ACM Press, Santa Cruz, July 1993) 127–136.

  18. L. Meyer, Probabilistic language learning under monotonicity constraints, in: K.P. Jantke, T. Shinohara, T. Zeugmann, eds., Proc. of ALT’95, Lecture Notes in AI 997 (Springer, Berlin, 1995) 169–185.

  19. L. Meyer, Probabilistic language learning under monotonicity constraints, Theoretical Computer Science 185 (1997) 81–128.

  20. L. Meyer, Aspects of complexity of conservative probabilistic learning, in: Proc. of COLT’98, to appear.

  21. L. Meyer, Aspects of complexity of probabilistic language learning (Institutsbericht, IIG, Freiburg, to appear).

  22. P. Odifreddi, Classical Recursion Theory (North-Holland, 1989).

  23. D. Osherson, M. Stob, S. Weinstein, Systems that Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists (MIT Press, Cambridge, MA, 1986).

  24. L. Pitt, Probabilistic Inductive Inference, Journal of the ACM 36,2 (1989) 383–433.

  25. T. Slaman, R. Solovay, When oracles do not help, in: Proc. of the 4th ACM Conf. on Comp. Learning Theory (ACM Press, Santa Cruz, July 1991) 379–383.

  26. R. Soare, Recursively Enumerable Sets and Degrees (Springer, 1987).

  27. F. Stephan, Noisy inference and oracles, Theoretical Computer Science 185 (1997) 129–157.

  28. R. Wiehagen, A Thesis in Inductive Inference, in: J. Dix, K.P. Jantke, P.H. Schmitt, eds., Proc. First International Workshop on Nonmonotonic and Inductive Logic, Lecture Notes in Artificial Intelligence 543 (Springer, Berlin, 1991) 184–207.

  29. R. Wiehagen, R. Freivalds, E.B. Kinber, On the Power of Probabilistic Strategies in Inductive Inference, Theoretical Computer Science 28 (1984), 111–133.

  30. R. Wiehagen, R. Freivalds, E.B. Kinber, Probabilistic versus Deterministic Inductive Inference in Nonstandard Numberings, Zeitschr. f. math. Logik und Grundlagen d. Math. 34 (1988) 531–539.

  31. T. Zeugmann, S. Lange, A Guided Tour Across the Boundaries of Learning Recursive Languages, in: K.P. Jantke and S. Lange, eds., Algorithmic Learning for Knowledge-Based Systems, Lecture Notes in Artificial Intelligence 961 (Springer, Berlin, 1995) 193–262.


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Meyer, L. (1998). Comparing the Power of Probabilistic Learning and Oracle Identification Under Monotonicity Constraints. In: Richter, M.M., Smith, C.H., Wiehagen, R., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 1998. Lecture Notes in Computer Science (LNAI), vol 1501. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49730-7_23

  • DOI: https://doi.org/10.1007/3-540-49730-7_23

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65013-3

  • Online ISBN: 978-3-540-49730-1
