
Dynamical Selection of Learning Algorithms

Chapter in Learning from Data

Part of the book series: Lecture Notes in Statistics (LNS, volume 112)

Abstract

Determining the conditions under which a given learning algorithm is appropriate is an open problem in machine learning, and methods for selecting a learning algorithm for a given domain have met with limited success. This paper proposes a new approach: predict a given example’s class by locating it in the “example space” and letting the best learner(s) in that region make the prediction. Regions of the example space are defined by the prediction patterns of the learners in use, and the learner(s) chosen for a prediction are those with the best past performance in the relevant region. This dynamic approach to learning algorithm selection is compared with other methods for selecting from multiple learning algorithms. The approach is then extended to weight, rather than select, the algorithms according to their past performance in a given region. Both approaches are further evaluated on a set of ten domains and compared to several other meta-learning strategies.
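
To make the mechanics concrete, the following is a minimal sketch of the idea, not the chapter’s implementation. It assumes a query’s “region” can be approximated by the k training examples whose cross-validated prediction patterns most resemble the query’s own pattern, and it shows both variants described in the abstract: selecting the locally most accurate learner, and weighting each learner’s vote by its local accuracy. All names here (DynamicSelector, k, the choice of base learners) are illustrative.

```python
# A sketch of dynamic learner selection/weighting; assumptions as noted above.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import cross_val_predict


class DynamicSelector:
    """Per-query selection (or weighting) of base learners by their
    accuracy in the query's region of the example space. Illustrative only."""

    def __init__(self, learners, k=10):
        self.learners = learners  # base learning algorithms (sklearn estimators)
        self.k = k                # neighbourhood size defining a "region"

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        # Cross-validated predictions give each training example a
        # "prediction pattern": one predicted label per base learner.
        self.cv_preds_ = np.column_stack(
            [cross_val_predict(clone(est), X, y, cv=5) for est in self.learners]
        )
        self.correct_ = self.cv_preds_ == y[:, None]  # per-learner correctness
        self.y_ = y
        self.fitted_ = [clone(est).fit(X, y) for est in self.learners]
        return self

    def predict(self, X, weighted=False):
        patterns = np.column_stack([est.predict(X) for est in self.fitted_])
        out = np.empty(len(patterns), dtype=self.y_.dtype)
        for i, pattern in enumerate(patterns):
            # Region = the k training examples whose prediction patterns
            # agree most with this query's pattern (Hamming-style similarity).
            agreement = (self.cv_preds_ == pattern).sum(axis=1)
            region = np.argsort(-agreement)[: self.k]
            local_acc = self.correct_[region].mean(axis=0)  # accuracy per learner
            if weighted:
                # Weighting variant: each learner votes for its own prediction
                # with a vote proportional to its local accuracy.
                votes = {}
                for label, w in zip(pattern, local_acc):
                    votes[label] = votes.get(label, 0.0) + w
                out[i] = max(votes, key=votes.get)
            else:
                # Selection variant: trust the locally most accurate learner.
                out[i] = pattern[np.argmax(local_acc)]
        return out


if __name__ == "__main__":
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    sel = DynamicSelector(
        [DecisionTreeClassifier(random_state=0), GaussianNB(), KNeighborsClassifier()]
    ).fit(X, y)
    print(sel.predict(X[:5]))                 # selection variant
    print(sel.predict(X[:5], weighted=True))  # weighting variant
```

The Hamming-style similarity over prediction patterns is one plausible reading of “regions defined by the prediction patterns of the learners”; other distance measures over the pattern space would fit the same framework.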




Copyright information

© 1996 Springer-Verlag New York, Inc.

About this chapter

Cite this chapter

Merz, C.J. (1996). Dynamical Selection of Learning Algorithms. In: Fisher, D., Lenz, H.-J. (eds) Learning from Data. Lecture Notes in Statistics, vol 112. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-2404-4_27

  • DOI: https://doi.org/10.1007/978-1-4612-2404-4_27

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-0-387-94736-5

  • Online ISBN: 978-1-4612-2404-4

  • eBook Packages: Springer Book Archive
