Topological Separations in Inductive Inference

  • Conference paper
Algorithmic Learning Theory (ALT 2013)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8139)

Abstract

A major question in learning in the limit from positive data is which classes of languages are learnable with respect to a given learning criterion. We are particularly interested in the reasons why a class of languages can be unlearnable, and we consider two types of reasons. One type is topological (as an example, Gold has shown that no class containing an infinite language and all of its finite sub-languages is learnable). The other type is computational (arising because the learners are required to be algorithmic). In particular, two learning criteria might allow for learning different classes of languages because of different topological restrictions, or because of different computational restrictions.
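
As a hedged illustration of the topological obstruction (the statement below is the classical result of Gold in standard terminology, not this paper's own formalism): a learner identifies a language L in the limit from positive data if, on every text for L (an enumeration of exactly the elements of L), its sequence of conjectures converges to a single correct hypothesis for L. Gold's observation then reads

\[
L_\infty \text{ infinite,}\quad \{L_\infty\} \cup \{\, D \subseteq L_\infty : D \text{ finite} \,\} \subseteq \mathcal{L}
\quad\Longrightarrow\quad \mathcal{L} \text{ is not identifiable in the limit from positive data,}
\]

and the obstruction binds even learners that are arbitrary (non-computable) functions: a learner that identifies every finite subset of \(L_\infty\) can be led, by chaining together prefixes on which it has settled on larger and larger finite subsets, onto a text for \(L_\infty\) on which it changes its conjecture infinitely often.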

In this paper we formalize the idea of two learning criteria separating topologically in learning power, which allows us to study more closely why two learning criteria separate in learning power. For a variety of learning criteria (concerning Fex, monotone, iterative and feedback learning) we show that certain learning criteria separate topologically, while certain others, which are known to separate, do not separate topologically. Showing that two learning criteria do not separate topologically implies that any known separation between them must necessarily exploit some computational restrictions.
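
One natural reading of this dichotomy (a hedged sketch only; the paper's precise definitions appear in its body and are not reproduced here): for a learning criterion \(\delta\), write \([\delta]_{\mathrm{rec}}\) for the classes of languages learnable under \(\delta\) by algorithmic learners, and \([\delta]\) for those learnable when the learner may be an arbitrary function. A class \(\mathcal{L}\) witnessing a separation of \(\delta_1\) from \(\delta_2\),

\[
\mathcal{L} \in [\delta_1]_{\mathrm{rec}} \quad\text{and}\quad \mathcal{L} \notin [\delta_2]_{\mathrm{rec}},
\]

does so topologically if moreover \(\mathcal{L} \notin [\delta_2]\), i.e. the obstruction already applies to non-computable learners; if instead \(\mathcal{L} \in [\delta_2]\), the separation on this witness necessarily exploits the computational restriction.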

References

  1. Angluin, D.: Inductive inference of formal languages from positive data. Information and Control 45, 117–135 (1980)

  2. Blum, L., Blum, M.: Toward a mathematical theory of inductive inference. Information and Control 28, 125–155 (1975)

  3. Baliga, G., Case, J.: Learnability: Admissible, co-finite, and hypersimple sets. Journal of Computer and System Sciences 53, 26–32 (1996)

  4. Case, J.: The power of vacillation in language learning. SIAM Journal on Computing 28, 1941–1969 (1999)

  5. Case, J., Jain, S., Lange, S., Zeugmann, T.: Incremental concept learning for bounded data mining. Information and Computation 152, 74–110 (1999)

  6. Case, J., Lynes, C.: Machine inductive inference and language identification. In: Nielsen, M., Schmidt, E.M. (eds.) ICALP 1982. LNCS, vol. 140, pp. 107–115. Springer, Heidelberg (1982)

  7. Case, J., Moelius, S.: U-shaped, iterative, and iterative-with-counter learning. Machine Learning 72, 63–88 (2008)

  8. Case, J., Moelius, S.: Optimal language learning from positive data. Information and Computation 209, 1293–1311 (2011)

  9. de Jongh, D., Kanazawa, M.: Angluin’s theorem for indexed families of r.e. sets and applications. In: Proc. of COLT (Computational Learning Theory), pp. 193–204 (1996)

  10. Gold, E.: Language identification in the limit. Information and Control 10, 447–474 (1967)

  11. Heinz, J., Kasprzik, A., Kötzing, T.: Learning in the limit with lattice-structured hypothesis spaces. Theoretical Computer Science 457, 111–127 (2012)

  12. Jantke, K.: Monotonic and non-monotonic inductive inference of functions and patterns. In: Dix, J., Schmitt, P.H., Jantke, K.P. (eds.) NIL 1990. LNCS, vol. 543, pp. 161–177. Springer, Heidelberg (1991)

  13. Jech, T.: Set Theory. Academic Press, NY (1978)

  14. Jain, S., Moelius, S., Zilles, S.: Learning without coding. Theoretical Computer Science 473, 124–148 (2013)

  15. Jain, S., Osherson, D., Royer, J., Sharma, A.: Systems that Learn: An Introduction to Learning Theory, 2nd edn. MIT Press, Cambridge (1999)

  16. Kötzing, T.: Abstraction and Complexity in Computational Learning in the Limit. PhD thesis, University of Delaware (2009), http://pqdtopen.proquest.com/#viewpdf?dispub=3373055

  17. Kötzing, T.: Iterative learning from positive data and counters. In: Kivinen, J., Szepesvári, C., Ukkonen, E., Zeugmann, T. (eds.) ALT 2011. LNCS, vol. 6925, pp. 40–54. Springer, Heidelberg (2011)

  18. Kinber, E., Stephan, F.: Language learning from texts: Mind changes, limited memory and monotonicity. Information and Computation 123, 224–241 (1995)

  19. Lange, S., Zeugmann, T.: Monotonic versus non-monotonic language learning. In: Proc. of Nonmonotonic and Inductive Logic, pp. 254–269 (1993)

  20. Lange, S., Zeugmann, T.: Incremental learning from positive data. Journal of Computer and System Sciences 53, 88–103 (1996)

  21. Osherson, D., Stob, M., Weinstein, S.: Note on a central lemma of learning theory. Journal of Mathematical Psychology 27, 86–92 (1983)

  22. Osherson, D., Stob, M., Weinstein, S.: Systems that Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists. MIT Press, Cambridge (1986)

  23. Osherson, D., Weinstein, S.: Criteria of language learning. Information and Control 52, 123–138 (1982)

  24. Royer, J., Case, J.: Subrecursive Programming Systems: Complexity and Succinctness. Research Monograph in Progress in Theoretical Computer Science. Birkhäuser, Boston (1994)

  25. Rogers, H.: Theory of Recursive Functions and Effective Computability. McGraw-Hill, New York (1967); reprinted by MIT Press, Cambridge (1987)

  26. Wexler, K., Culicover, P.: Formal Principles of Language Acquisition. MIT Press, Cambridge (1980)

  27. Wiehagen, R.: Limes-Erkennung rekursiver Funktionen durch spezielle Strategien. Elektronische Informationsverarbeitung und Kybernetik 12, 93–99 (1976)

Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Case, J., Kötzing, T. (2013). Topological Separations in Inductive Inference. In: Jain, S., Munos, R., Stephan, F., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2013. Lecture Notes in Computer Science (LNAI), vol 8139. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40935-6_10

  • DOI: https://doi.org/10.1007/978-3-642-40935-6_10

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-40934-9

  • Online ISBN: 978-3-642-40935-6
