Evaluation of Post-hoc XAI Approaches Through Synthetic Tabular Data

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12117)

Abstract

Evaluating the explanations given by post-hoc XAI approaches on tabular data is challenging, since subjectively judging explanations of tabular relations is non-trivial, in contrast to, e.g., judging image heatmap explanations. In order to quantify XAI performance on categorical tabular data, where feature relationships can often be described by Boolean functions, we propose an evaluation setting based on the generation of synthetic datasets. To create gold standard explanations, we present a definition of feature relevance in Boolean functions. In the proposed setting we evaluate eight state-of-the-art XAI approaches and gain novel insights into XAI performance on categorical tabular data. We find that the investigated approaches often fail to faithfully explain even basic relationships within categorical data.
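
To illustrate the kind of setup the abstract describes, the sketch below generates binary tabular data from a hand-picked Boolean labelling function and derives per-sample gold-standard relevance labels against which attribution methods could be scored. The function y = (x0 AND x1) OR x2, the sampling scheme, and the relevance criterion (a feature counts as relevant for a sample if flipping its value changes the label) are illustrative assumptions for this sketch, not the paper's actual dataset or relevance definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def boolean_target(X):
    """Illustrative Boolean labelling function: y = (x0 AND x1) OR x2."""
    return ((X[:, 0] & X[:, 1]) | X[:, 2]).astype(int)

def gold_relevance(X, f):
    """Mark feature j as relevant for a sample if flipping x_j changes f(x).

    This is the classic 'sensitive variable' notion for Boolean functions,
    used here only as a stand-in for the paper's relevance definition.
    """
    relevance = np.zeros_like(X)
    for j in range(X.shape[1]):
        flipped = X.copy()
        flipped[:, j] = 1 - flipped[:, j]
        relevance[:, j] = (f(X) != f(flipped)).astype(int)
    return relevance

# Synthetic dataset: 1000 samples, 5 binary features (features 3 and 4 are noise).
X = rng.integers(0, 2, size=(1000, 5))
y = boolean_target(X)
R = gold_relevance(X, boolean_target)

# An XAI method's attributions for each sample could now be compared against R,
# e.g. by checking whether the highest-ranked features coincide with R == 1.
print(X[:3], y[:3], R[:3], sep="\n")
```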

Notes

  1. http://www.dmir.uni-wuerzburg.de/projects/deepscan/xai-eval-data/.

Acknowledgement

The authors acknowledge financial support from the Federal Ministry of Education and Research of Germany as part of the DeepScan project (01IS18045A).

Author information

Correspondence to Julian Tritscher, Daniel Schlör, Lena Hettinger or Andreas Hotho.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Tritscher, J., Ring, M., Schlör, D., Hettinger, L., Hotho, A. (2020). Evaluation of Post-hoc XAI Approaches Through Synthetic Tabular Data. In: Helic, D., Leitner, G., Stettinger, M., Felfernig, A., Raś, Z.W. (eds) Foundations of Intelligent Systems. ISMIS 2020. Lecture Notes in Computer Science, vol 12117. Springer, Cham. https://doi.org/10.1007/978-3-030-59491-6_40

  • DOI: https://doi.org/10.1007/978-3-030-59491-6_40

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59490-9

  • Online ISBN: 978-3-030-59491-6

  • eBook Packages: Computer Science, Computer Science (R0)
