Generalization and specialization strategies for learning r.e. languages

Published in Annals of Mathematics and Artificial Intelligence 23, 1–26 (1998)

Abstract

Overgeneralization is a major issue in the identification of grammars for formal languages from positive data. Different formulations of generalization and specialization strategies have been proposed to address this problem, and recently there has been a flurry of activity investigating such strategies in the context of indexed families of recursive languages. The present paper studies the power of these strategies to learn recursively enumerable (r.e.) languages from positive data. In particular, the power of strong-monotonic, monotonic, and weak-monotonic strategies (together with their dual notions, which model specialization) is investigated for the identification of r.e. languages. These investigations turn out to differ from the earlier work on learning indexed families of recursive languages and at times require new proof techniques. A complete picture of the relative power of each of the strategies considered is provided. An interesting consequence is that the power of weak-monotonic strategies is equivalent to that of conservative strategies; this parallels the scenario for indexed families of recursive languages. It is further shown that any identifiable collection of r.e. languages can be identified by a strategy exhibiting the dual of the weak-monotonic property. An immediate consequence of the proof of this result is that if attention is restricted to infinite r.e. languages, then conservative strategies can identify every identifiable collection.
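
For orientation, the monotonicity notions named in the abstract admit the following standard formulations from the monotonic language learning literature; this is a sketch, and the notation is assumed for illustration rather than taken from the paper itself: M is a learning machine, T a text (positive presentation) of the target language L, T[n] the initial segment of T of length n, content(T[n]) the set of elements appearing in T[n], and W_j the r.e. language generated by grammar (index) j.

% Sketch of the standard monotonicity constraints on a learner M processing a
% text T for a target language L (notation assumed, not taken from this paper).
\begin{align*}
\text{strong-monotonic:} \quad & (\forall n)\; W_{M(T[n])} \subseteq W_{M(T[n+1])}\\
\text{monotonic:}        \quad & (\forall n)\; W_{M(T[n])} \cap L \subseteq W_{M(T[n+1])} \cap L\\
\text{weak-monotonic:}   \quad & (\forall n)\; \mathrm{content}(T[n+1]) \subseteq W_{M(T[n])} \;\Rightarrow\; W_{M(T[n])} \subseteq W_{M(T[n+1])}\\
\text{conservative:}     \quad & (\forall n)\; \mathrm{content}(T[n+1]) \subseteq W_{M(T[n])} \;\Rightarrow\; M(T[n]) = M(T[n+1])
\end{align*}
% The dual (specialization) notions reverse the inclusions between successive
% hypotheses; e.g., dual strong-monotonic requires W_{M(T[n])} \supseteq W_{M(T[n+1])}.

Under these formulations a conservative learner is in particular weak-monotonic, since an unchanged hypothesis trivially satisfies the inclusion; the equivalence stated in the abstract thus says that, for r.e. languages, the formally stronger conservativeness requirement costs no identification power.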

Cite this article

Jain, S., Sharma, A. Generalization and specialization strategies for learning r.e. languages. Annals of Mathematics and Artificial Intelligence 23, 1–26 (1998). https://doi.org/10.1023/A:1018903922049
