Abstract
Ensemble methods are popular learning methods that usually increase the predictive accuracy of a classifier. This comes at the cost of interpretability, however: insight into the decision process of an ensemble is hard to obtain. This is a major reason why ensemble methods have not been used extensively in the setting of inductive logic programming. In this paper we aim to overcome this comprehensibility issue by learning a single interpretable first-order model that approximates the first-order ensemble. The new model is obtained by exploiting the class distributions predicted by the ensemble, which are used to compute the heuristics that decide which tests appear in the new model. As a result, we obtain a model that gives insight into the decision process of the ensemble while being more accurate than a single model learned directly from the data.
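The sketch below illustrates the general idea in a purely propositional setting with scikit-learn; it is an assumption-laden approximation, not the paper's first-order method or its exact heuristic computation. An ensemble is learned first, and a single decision tree is then induced from the ensemble's predictions instead of the original labels, so that its tests approximate the ensemble's decision process.

```python
# Minimal sketch (assumption: propositional data and scikit-learn, not the
# ILP setting of the paper). A single tree is fit to the ensemble's
# predictions so that its split heuristics reflect the ensemble's behaviour.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X_train, X_test, y_train, y_test = train_test_split(
    *load_breast_cancer(return_X_y=True), random_state=0)

# 1. Learn the ensemble on the original labels.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# 2. Learn a single interpretable tree that approximates the ensemble:
#    its splits are chosen based on the ensemble's predicted classes.
surrogate = DecisionTreeClassifier(max_depth=5, random_state=0)
surrogate.fit(X_train, forest.predict(X_train))

# 3. Compare against a single tree learned directly from the data.
direct = DecisionTreeClassifier(max_depth=5, random_state=0)
direct.fit(X_train, y_train)

print("forest   :", accuracy_score(y_test, forest.predict(X_test)))
print("surrogate:", accuracy_score(y_test, surrogate.predict(X_test)))
print("direct   :", accuracy_score(y_test, direct.predict(X_test)))
```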
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
Cite this paper
Van Assche, A., Blockeel, H. (2008). Seeing the Forest Through the Trees. In: Blockeel, H., Ramon, J., Shavlik, J., Tadepalli, P. (eds) Inductive Logic Programming. ILP 2007. Lecture Notes in Computer Science, vol 4894. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78469-2_26
DOI: https://doi.org/10.1007/978-3-540-78469-2_26
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-78468-5
Online ISBN: 978-3-540-78469-2