
Exploring Counterfactual Explanations for Classification and Regression Trees

  • Conference paper

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1524)

Abstract

The problem of counterfactual explanations is that of minimally adjusting the attributes of a source input instance so that it is classified as a target class under a given classifier. Counterfactual explanations answer practical questions such as “what should my annual income be for my loan to be approved?”. We focus on classification and regression trees, both axis-aligned and oblique (having hyperplane splits), and formulate the counterfactual explanation as an optimization problem. Although this problem is nonconvex and nondifferentiable, an exact solution can be computed very efficiently, even with high-dimensional feature vectors and with both continuous and categorical features. We also show how the counterfactual explanation formulation can answer a range of important practical questions, providing a way to query a trained tree and suggest possible actions to overturn its decision, and we demonstrate this in several case studies. The results are particularly relevant for finance, medicine, or legal applications, where interpretability and counterfactual explanations are especially important.
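To make the idea concrete, here is a minimal sketch of the leaf-enumeration approach for the axis-aligned case (not the authors' implementation; the toy loan-approval tree, feature names, and thresholds are invented for illustration). Each leaf of an axis-aligned tree corresponds to a box in input space, so the closest counterfactual can be found exactly by projecting the source instance onto every box whose leaf predicts the target class and keeping the nearest projection:

```python
import math

# A toy axis-aligned decision tree: internal nodes send x[feature] <= threshold
# left, > threshold right; leaves carry a predicted class. Each leaf
# corresponds to an axis-aligned box in input space.
TREE = {
    "feature": 0, "threshold": 50.0,          # hypothetical income (k$)
    "left":  {"leaf": "deny"},                # income <= 50
    "right": {
        "feature": 1, "threshold": 0.4,       # hypothetical debt ratio
        "left":  {"leaf": "approve"},         # ratio <= 0.4
        "right": {"leaf": "deny"},
    },
}

def leaf_boxes(node, lo, hi):
    """Yield ((lo, hi), class) for every leaf box of the tree."""
    if "leaf" in node:
        yield (list(lo), list(hi)), node["leaf"]
        return
    f, t = node["feature"], node["threshold"]
    hi2 = list(hi); hi2[f] = min(hi[f], t)    # left branch: x[f] <= t
    yield from leaf_boxes(node["left"], lo, hi2)
    lo2 = list(lo); lo2[f] = max(lo[f], t)    # right branch: x[f] > t
    yield from leaf_boxes(node["right"], lo2, hi)

def counterfactual(x, tree, target, n_features, eps=1e-6):
    """Closest point to x (in L2 distance) that the tree classifies as target.

    Exact for axis-aligned trees: project x onto every target-class leaf box
    (coordinate-wise clipping) and keep the nearest projection. The eps nudge
    moves the projection strictly inside boxes with open lower bounds.
    """
    lo0 = [-math.inf] * n_features
    hi0 = [math.inf] * n_features
    best, best_d = None, math.inf
    for (lo, hi), cls in leaf_boxes(tree, lo0, hi0):
        if cls != target:
            continue
        z = [min(max(x[i], lo[i] + eps), hi[i]) for i in range(n_features)]
        d = math.dist(x, z)
        if d < best_d:
            best, best_d = z, d
    return best, best_d

# "What should my income be for my loan to be approved?"
x = [40.0, 0.3]                                # income 40k, ratio 0.3: denied
cf, dist = counterfactual(x, TREE, "approve", n_features=2)
```

In this toy example the nearest approving point keeps the debt ratio at 0.3 and raises income to just above the 50k threshold. The oblique case handled in the paper is harder, since each leaf region is then an intersection of half-spaces rather than a box, but the same enumerate-and-project structure applies with a quadratic program per leaf.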



Acknowledgments

Work partially supported by NSF award IIS–2007147.

Author information

Correspondence to Suryabhan Singh Hada.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Hada, S.S., Carreira-Perpiñán, M.Á. (2021). Exploring Counterfactual Explanations for Classification and Regression Trees. In: Kamp, M., et al. Machine Learning and Principles and Practice of Knowledge Discovery in Databases. ECML PKDD 2021. Communications in Computer and Information Science, vol 1524. Springer, Cham. https://doi.org/10.1007/978-3-030-93736-2_37

  • DOI: https://doi.org/10.1007/978-3-030-93736-2_37
  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-93735-5

  • Online ISBN: 978-3-030-93736-2

