Authors:
Mir Riyanul Islam, Mobyen Uddin Ahmed and Shahina Begum
Affiliation:
Artificial Intelligence and Intelligent Systems Research Group, School of Innovation, Design and Engineering, Mälardalen University, Universitetsplan 1, 722 20 Västerås, Sweden
Keyword(s):
Counterfactuals, Explainability, Explainable Artificial Intelligence, Interpretability, Regression, Rule-Based Explanation, XGBoost.
Abstract:
Tree-ensemble models such as Extreme Gradient Boosting (XGBoost) are well-known Machine Learning models that achieve higher prediction accuracy than traditional tree-based models. This higher accuracy, however, comes at the cost of reduced interpretability: unlike single tree-based models, XGBoost does not expose an explicit decision path or prediction rule. This paper proposes iXGB (interpretable XGBoost), an approach to improve the interpretability of XGBoost. iXGB approximates a set of rules from the internal structure of XGBoost and the characteristics of the data. In addition, iXGB generates a set of counterfactuals from the neighbourhood of each test instance to help end-users understand the operational relevance of a prediction. The performance of iXGB in generating rule sets is evaluated through experiments on real and benchmark datasets, which demonstrate reasonable interpretability. The evaluation results also support the idea that the interpretability of XGBoost can be improved without using surrogate methods.