Abstract
A linear model tree is a decision tree with a linear functional model in each leaf. In previous work we demonstrated that such trees can be learnt incrementally, and that they form good models of non-linear dynamic environments. In this paper we introduce a new incremental node splitting criterion that is significantly faster than both our previous algorithm and other non-parametric incremental learning techniques, and that in addition scales better with dimensionality. Empirical results in three domains, ranging from a simple benchmark test function to a complex ten-dimensional flight simulator, show that in all cases the algorithm converges to a good final approximation, although the improved performance comes at the cost of slower initial learning.
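To make the abstract's definition concrete, the following is a minimal sketch of how a linear model tree produces a prediction: internal nodes route an input by axis-aligned threshold tests, and the reached leaf applies its own linear model. This is illustrative only and is not the paper's incremental learning algorithm; the `Node` structure and `predict` function are hypothetical names.

```python
# Illustrative sketch of linear-model-tree prediction (not the paper's
# incremental algorithm). Internal nodes split on one input dimension;
# leaves hold a linear model y = w . x + b.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    # Internal-node fields: split on feature `dim` at `threshold`.
    dim: int = 0
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    # Leaf fields: linear model y = w . x + b.
    w: List[float] = field(default_factory=list)
    b: float = 0.0


def predict(node: Node, x: List[float]) -> float:
    """Route x down the tree to a leaf, then evaluate its linear model."""
    while node.left is not None:  # descend until a leaf is reached
        node = node.left if x[node.dim] <= node.threshold else node.right
    return sum(wi * xi for wi, xi in zip(node.w, x)) + node.b
```

For example, a two-leaf tree splitting at x = 0 with leaf models y = -x and y = x represents the non-linear function |x| exactly, which hints at how piecewise-linear leaves can model non-linear environments.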
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Potts, D. (2004). Fast Incremental Learning of Linear Model Trees. In: Zhang, C., Guesgen, H.W., Yeap, W.K. (eds) PRICAI 2004: Trends in Artificial Intelligence. Lecture Notes in Computer Science, vol. 3157. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-28633-2_25
Print ISBN: 978-3-540-22817-2
Online ISBN: 978-3-540-28633-2