
Learning the Bias of a Classifier in a GA-Based Inductive Learning Environment

  • Conference paper
Advances in Intelligent Computing (ICIC 2005)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3644)


Abstract

We have explored a meta-learning approach to improving the prediction accuracy of a classification system. In this approach, a meta-classifier learns the bias of a classifier so that it can evaluate the prediction the classifier makes for a given example and thereby improve the overall performance of the classification system. The paper discusses our meta-learning approach in detail and presents empirical results showing the improvement that can be achieved with the meta-learning approach in a GA-based inductive learning environment.

This work was supported by the Ministry of Education and Human Resources Development through the Embedded Software Open Education Resource Center (ESC) at Sangmyung University.
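The paper itself provides no code, but the idea stated in the abstract can be illustrated with a minimal sketch: train a meta-classifier to predict whether the base classifier's output for an example is correct, then use that estimate to accept or flag each prediction. The sketch below uses scikit-learn, with a shallow decision tree standing in for the authors' GA-based rule learner and logistic regression as the meta-classifier; the choice of meta-features (input attributes plus the base prediction) and the 0.5 acceptance threshold are assumptions made for illustration, not details taken from the paper.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in dataset (the paper's benchmark data sets are not assumed here).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base classifier: a shallow decision tree stands in for the GA-learned rule set.
base = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# Meta-level training set: (attributes, base prediction) -> "was the base classifier correct?"
base_pred_train = base.predict(X_train)
meta_X = np.column_stack([X_train, base_pred_train])
meta_y = (base_pred_train == y_train).astype(int)

# Meta-classifier that models the base classifier's bias.
meta = LogisticRegression(max_iter=1000).fit(meta_X, meta_y)

# At prediction time the meta-classifier scores how reliable each base
# prediction is; unreliable predictions can be rejected or handled otherwise.
base_pred_test = base.predict(X_test)
reliability = meta.predict_proba(np.column_stack([X_test, base_pred_test]))[:, 1]
accepted = reliability >= 0.5

print("base accuracy on all test examples:", (base_pred_test == y_test).mean())
print("base accuracy on accepted examples:",
      (base_pred_test[accepted] == y_test[accepted]).mean())

In the paper's setting the base learner is the GA-based inductive learner and the data sets differ, but the division of labour is the same: the meta-classifier evaluates the base classifier's predictions rather than producing class labels itself.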





Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kim, Y., Hong, C. (2005). Learning the Bias of a Classifier in a GA-Based Inductive Learning Environment. In: Huang, DS., Zhang, XP., Huang, GB. (eds) Advances in Intelligent Computing. ICIC 2005. Lecture Notes in Computer Science, vol 3644. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11538059_96


  • DOI: https://doi.org/10.1007/11538059_96

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28226-6

  • Online ISBN: 978-3-540-31902-3

