
Statistical guidance in symbolic learning

Published in: Annals of Mathematics and Artificial Intelligence


Abstract

Concept learning methods of Artificial Intelligence (AI) are usefully guided by statistical measures of concept quality. We review the application of statistical measures in tutored methods of learning from examples, describe the recent application of these measures to conceptual clustering, and propose statistical applications in explanation-based learning.
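As one concrete illustration (a sketch, not code from this paper), a canonical statistical measure of the kind the abstract refers to is information gain, used as an attribute-selection measure in decision-tree induction such as Quinlan's ID3. The data and function names below are illustrative only:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attribute, labels):
    """Expected reduction in label entropy from splitting on `attribute`.

    `examples` is a list of dicts mapping attribute names to values;
    `labels` is the parallel list of class labels.
    """
    n = len(examples)
    # Partition the labels by the value each example takes on `attribute`.
    by_value = {}
    for ex, lab in zip(examples, labels):
        by_value.setdefault(ex[attribute], []).append(lab)
    # Weighted average entropy of the partitions, subtracted from the base entropy.
    remainder = sum(len(part) / n * entropy(part) for part in by_value.values())
    return entropy(labels) - remainder

# Toy data: 'color' separates the two classes perfectly, 'size' not at all.
examples = [{"color": "red",  "size": "s"},
            {"color": "red",  "size": "l"},
            {"color": "blue", "size": "s"},
            {"color": "blue", "size": "l"}]
labels = ["+", "+", "-", "-"]
print(information_gain(examples, "color", labels))  # 1.0
print(information_gain(examples, "size", labels))   # 0.0
```

A learner guided by this measure would prefer to test `color` first, since it yields the larger expected reduction in class uncertainty.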




Additional information

This work was supported by a Vanderbilt Research Council Grant.


Cite this article

Fisher, D.H., Chan, P.K. Statistical guidance in symbolic learning. Ann Math Artif Intell 2, 135–147 (1990). https://doi.org/10.1007/BF01531002
