
Development of Methods How to Avoid the Overfitting-Effect within the GeLog-System

  • Conference paper
Knowledge-Based Intelligent Information and Engineering Systems (KES 2003)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2774)


Abstract

This article examines methods for avoiding the overfitting effect within the GeLog system. The effect can be observed in nearly all inductive concept learning systems: when examples are misclassified, false and, in particular, overly specific theories are learned. A number of procedures exist to counter or avoid overfitting, and this article develops criteria for selecting among them. In this context, integrability into the GeLog system, a system for genetic inductive logic programming, is of central importance. Finally, a filter procedure based on the correlation heuristic, which is also used in top-down pruning, is selected because it promises applicability to a relatively wide range of problems. The effectiveness of the method is then demonstrated through systematic experiments.

This scientific work was supported by a Habilitation Fellowship of the Bavarian Government (1999) and by the German Academic Exchange Service (DAAD).
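As a rough illustration of the filter procedure named in the abstract, the following Python sketch filters candidate clauses by a correlation heuristic. The heuristic shown is the phi coefficient computed from a clause's confusion counts on the training examples, as commonly used in top-down pruning for relational learning; the function names, the `evaluate` callback, and the threshold value are hypothetical and not taken from the paper itself.

```python
import math

def correlation(tp: int, fp: int, tn: int, fn: int) -> float:
    """Phi coefficient: correlation between a clause's predictions
    and the true classification of the training examples."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0.0:
        return 0.0  # degenerate clause: covers everything or nothing
    return (tp * tn - fp * fn) / denom

def filter_clauses(clauses, evaluate, threshold=0.3):
    """Keep only clauses whose correlation with the target concept
    exceeds the threshold. Low-correlation clauses are assumed to fit
    noise (misclassified examples) rather than the concept."""
    kept = []
    for clause in clauses:
        tp, fp, tn, fn = evaluate(clause)  # confusion counts on training data
        if correlation(tp, fp, tn, fn) > threshold:
            kept.append(clause)
    return kept
```

The intuition behind such a filter is that a clause fitted to misclassified examples covers them in a way that is only weakly correlated with the target concept, so a correlation cutoff screens out overly specific clauses before they enter the learned theory.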




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kókai, G. (2003). Development of Methods How to Avoid the Overfitting-Effect within the GeLog-System. In: Palade, V., Howlett, R.J., Jain, L. (eds) Knowledge-Based Intelligent Information and Engineering Systems. KES 2003. Lecture Notes in Computer Science (LNAI), vol 2774. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45226-3_131


  • DOI: https://doi.org/10.1007/978-3-540-45226-3_131

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40804-8

  • Online ISBN: 978-3-540-45226-3

  • eBook Packages: Springer Book Archive
