
Learning and revising theories in noisy domains

  • Session 9
  • Conference paper

Algorithmic Learning Theory (ALT 1997)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1316)

Abstract

This paper describes an approach to learning from noisy examples with an approximate theory. The approach combines a theory preference criterion with an overfitting avoidance strategy. The theory preference criterion is a coding scheme that extends the minimum description length (MDL) principle by unifying model complexity and exception cost: model complexity is the encoding cost for an algorithm to obtain a logic program, and exception cost is the encoding length of the training examples misclassified by a theory. When the system learns from the remainder of the training set, it applies an overfitting avoidance technique and thus induces more accurate clauses. Owing to these properties, our approach appears more accurate and efficient than existing approaches.
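
As a rough formalization (the notation below is ours, not taken verbatim from the paper), the preference criterion can be read as minimizing a two-part description length:

\[
L(T, E) = \underbrace{L(T)}_{\text{model complexity}} + \underbrace{L(E \mid T)}_{\text{exception cost}}
\]

where T is a candidate logic program, E the training set, L(T) the encoding cost for an algorithm to obtain T, and L(E | T) the encoding length of the training examples misclassified by T. The preferred theory is the T that minimizes L(T, E), trading clause complexity against the cost of treating noisy examples as exceptions.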




Editor information

Ming Li, Akira Maruoka


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rang, X., Numao, M. (1997). Learning and revising theories in noisy domains. In: Li, M., Maruoka, A. (eds) Algorithmic Learning Theory. ALT 1997. Lecture Notes in Computer Science, vol 1316. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63577-7_53


  • DOI: https://doi.org/10.1007/3-540-63577-7_53


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63577-2

  • Online ISBN: 978-3-540-69602-5

