Noise-Resistant Incremental Relational Learning Using Possible Worlds

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2583)

Abstract

Incremental learning from noisy data is a difficult task and has received very little attention in the field of Inductive Logic Programming. This paper outlines an approach to noisy incremental learning based on a possible worlds model and its implementation in NILE. Several issues relating to the use of this model are addressed. Empirical results are presented for an existing batch domain and also for an interactive learning task.
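The abstract only outlines the approach, and the full text is behind the paywall, so the following is an illustrative sketch of what a possible-worlds incremental learner might look like, not NILE's actual algorithm. All names here (World, PossibleWorldsLearner, noise_budget) are invented for the example; the assumption is that the learner keeps several candidate theories ("worlds") alive in parallel and tolerates noise by letting each world disagree with a bounded number of examples before being discarded.

```python
# Hypothetical sketch of a possible-worlds incremental learner.
# NOT NILE's algorithm; an illustration of the general idea only.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Example = Tuple[Dict[str, bool], bool]        # (instance description, label)
Hypothesis = Callable[[Dict[str, bool]], bool]

@dataclass
class World:
    """One possible world: a candidate hypothesis plus its mistake count."""
    hypothesis: Hypothesis
    name: str
    mistakes: int = 0

class PossibleWorldsLearner:
    """Maintains competing candidate theories. A contradicting example only
    demotes a world; a world is discarded once it exceeds the noise budget,
    so isolated noisy examples do not force an immediate theory revision."""

    def __init__(self, worlds: List[World], noise_budget: int = 2):
        self.worlds = worlds
        self.noise_budget = noise_budget

    def observe(self, example: Example) -> None:
        instance, label = example
        for world in self.worlds:
            if world.hypothesis(instance) != label:
                world.mistakes += 1
        # Prune worlds that have exceeded the noise budget.
        self.worlds = [w for w in self.worlds if w.mistakes <= self.noise_budget]

    def best(self) -> World:
        return min(self.worlds, key=lambda w: w.mistakes)

# Toy usage: learn whether an animal flies, with one mislabelled example.
worlds = [
    World(lambda x: x["has_wings"], "flies iff has wings"),
    World(lambda x: x["is_bird"], "flies iff is a bird"),
]
learner = PossibleWorldsLearner(worlds, noise_budget=1)
stream: List[Example] = [
    ({"has_wings": True,  "is_bird": True},  True),
    ({"has_wings": False, "is_bird": False}, False),
    ({"has_wings": True,  "is_bird": False}, True),   # bat: wings, not a bird
    ({"has_wings": True,  "is_bird": True},  False),  # noisy label
]
for ex in stream:
    learner.observe(ex)
print(learner.best().name)  # the wings-based world survives with one mistake
```

The point of the sketch is that noise tolerance comes from delayed commitment: each world absorbs a contradicting example as a demerit rather than triggering a revision, so a single noisy label cannot destroy an otherwise well-supported theory.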




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Westendorp, J. (2003). Noise-Resistant Incremental Relational Learning Using Possible Worlds. In: Matwin, S., Sammut, C. (eds) Inductive Logic Programming. ILP 2002. Lecture Notes in Computer Science, vol 2583. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36468-4_21


  • DOI: https://doi.org/10.1007/3-540-36468-4_21

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-00567-4

  • Online ISBN: 978-3-540-36468-9

  • eBook Packages: Springer Book Archive
