Towards realistic theories of learning

  • Invited Talks
  • Conference paper

Algorithmic Learning Theory (AII 1994, ALT 1994)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 872)

Abstract

In computational learning theory, continuing efforts are made to formulate models of machine learning that are more realistic than previously available ones. Two of the most popular recently proposed models, Valiant's PAC learning model and Angluin's query learning model, can be thought of as refinements of preceding models such as Gold's classic paradigm of identification in the limit, placing greater emphasis on the question of how fast learning can take place. A considerable number of results have been obtained within these two frameworks, resolving the learnability questions for many important classes of functions and languages. These two particular learning models are by no means comprehensive, however, and many important aspects of learning are not directly addressed in them. Aiming towards more realistic theories of learning, many new models and extensions of existing models that attempt to formalize such aspects have been developed recently. In this paper, we review some of these new extensions and models in computational learning theory, concentrating in particular on those proposed and studied by researchers at Theory NEC Laboratory, RWCP, and their colleagues at other institutions.
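
For readers who want the formal criterion behind these names, the standard PAC definition (stated here from the general literature; the abstract itself does not spell it out) is as follows: a concept class $C$ is PAC learnable if there is an algorithm $A$ such that for every target $c \in C$, every distribution $D$ over the instance space, and every $\epsilon, \delta \in (0,1)$, given $m = \mathrm{poly}(1/\epsilon, 1/\delta, \mathrm{size}(c))$ examples drawn i.i.d. from $D$ and labeled by $c$, $A$ outputs a hypothesis $h$ satisfying

$$\Pr\big[\,\mathrm{err}_D(h) \le \epsilon\,\big] \;\ge\; 1-\delta, \qquad \text{where } \mathrm{err}_D(h) = \Pr_{x \sim D}\big[\,h(x) \neq c(x)\,\big].$$

The emphasis on "how fast learning can take place" corresponds to additionally requiring $A$ to run in time polynomial in the same parameters; in Angluin's query model, the analogous resource is the number of membership and equivalence queries the learner issues before exact identification.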

Real World Computing Partnership

Editor information

Setsuo Arikawa, Klaus P. Jantke

Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Abe, N. (1994). Towards realistic theories of learning. In: Arikawa, S., Jantke, K.P. (eds) Algorithmic Learning Theory (AII/ALT 1994). Lecture Notes in Computer Science, vol 872. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58520-6_64

  • DOI: https://doi.org/10.1007/3-540-58520-6_64

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-58520-6

  • Online ISBN: 978-3-540-49030-2
