Rashomon Effect and Consistency in Explainable Artificial Intelligence (XAI)

  • Conference paper
  • In: Proceedings of the Future Technologies Conference (FTC) 2022, Volume 1 (FTC 2022)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 559)

Abstract

The consistency of explanations in explainable artificial intelligence (XAI), especially with regard to the Rashomon effect, is the focus of the work presented here. The Rashomon effect names the phenomenon of obtaining different machine learning (ML) explanations when different models are employed to describe the same data. On the basis of concrete examples, cases of the Rashomon effect are visually demonstrated and discussed to underline how difficult it is, in practice, to produce definite and unambiguous machine learning explanations and predictions. Artificial intelligence (AI) is presently undergoing a so-called replication and reproducibility crisis, which hinders models and techniques from being properly assessed for robustness, fairness, and safety. Studying the Rashomon effect is therefore important for understanding the causes of the unintended variability of results that originates from within the models and the XAI methods themselves.
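To make the phenomenon concrete, the following is a minimal sketch (not the paper's own experiment): it fits two regressors of comparable accuracy to the same data and compares a model-agnostic explanation for each. The choice of scikit-learn's California Housing data, the two model families, and permutation importance as the explanation method are illustrative assumptions.

```python
# Minimal sketch of the Rashomon effect: two models with comparable accuracy
# on the same data can yield different feature-importance explanations.
# Dataset, models, and explanation method are illustrative choices.
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    r2 = model.score(X_test, y_test)
    # Permutation importance as a model-agnostic explanation of each model.
    imp = permutation_importance(
        model, X_test, y_test, n_repeats=5, random_state=0
    )
    ranking = X.columns[imp.importances_mean.argsort()[::-1]]
    print(f"{name}: R^2 = {r2:.3f}, feature ranking = {list(ranking)}")
```

Comparable test scores combined with diverging feature rankings are precisely the kind of inconsistency the paper examines.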



Author information

Correspondence to Anastasia-M. Leventi-Peetz.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Leventi-Peetz, A.-M., Weber, K. (2023). Rashomon Effect and Consistency in Explainable Artificial Intelligence (XAI). In: Arai, K. (ed.) Proceedings of the Future Technologies Conference (FTC) 2022, Volume 1. FTC 2022. Lecture Notes in Networks and Systems, vol. 559. Springer, Cham. https://doi.org/10.1007/978-3-031-18461-1_52
