ILIME: Local and Global Interpretable Model-Agnostic Explainer of Black-Box Decision

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11695)

Abstract

Despite outperforming humans in various supervised learning tasks, complex machine learning models are criticised for their opacity, which makes them hard to trust, especially when used in critical domains (e.g., healthcare, self-driving cars). Understanding the reasons behind a machine learning model's decision provides insight into the model and transforms it from a non-interpretable (black-box) model into an interpretable one that humans can understand. Such insight is also important for identifying any bias or unfairness in the model's decisions and for ensuring that the model works as expected. In this paper, we present ILIME, a novel technique that explains the prediction of any supervised learning-based model through an interpretation mechanism based on the instances that are most influential for the prediction being explained. We demonstrate the effectiveness of our approach by explaining different models on different datasets. Our experiments show that ILIME outperforms a state-of-the-art baseline technique, LIME, in both the quality of its explanations and its accuracy in mimicking the behaviour of the black-box model. In addition, we present a global attribution technique that aggregates the local explanations generated by ILIME into a few global explanations that mimic the behaviour of the black-box model globally in a simple way.
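Since the paper's full algorithm is not reproduced on this page, the following is a minimal Python sketch of the general recipe the abstract describes: fit a weighted linear surrogate to black-box predictions in a neighbourhood of the instance, then aggregate the local attributions into a global feature ranking. The model, dataset, kernel, and the proximity-based weighting (standing in for ILIME's influence-based weighting of instances) are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train an opaque model to play the role of the black box.
X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
feature_scale = X.std(axis=0)

def explain_instance(x, predict_fn, n_samples=2000, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate around instance x.

    This follows the LIME recipe the paper builds on. ILIME's distinctive
    ingredient, weighting by the most influential training instances for
    the prediction, is approximated here by a simple proximity kernel
    (an assumption, not the authors' exact computation).
    """
    rng = np.random.default_rng(seed)
    # Sample a neighbourhood around x, with per-feature noise scaling.
    Z = x + rng.normal(size=(n_samples, x.size)) * feature_scale
    # Query the black box for the positive-class probability.
    preds = predict_fn(Z)[:, 1]
    # Proximity weights: nearer perturbations count more.
    dist = np.linalg.norm((Z - x) / feature_scale, axis=1)
    weights = np.exp(-dist**2 / (kernel_width * x.size))
    # Interpretable surrogate: weighted ridge regression.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_  # per-feature local attribution for x

# Local explanation for one instance.
local_attr = explain_instance(X[0], black_box.predict_proba)

# Global attribution in the spirit of the paper: aggregate absolute local
# attributions over a sample of instances into a single feature ranking.
idx = np.random.default_rng(1).choice(len(X), size=50, replace=False)
global_attr = np.mean(
    [np.abs(explain_instance(x, black_box.predict_proba, n_samples=500))
     for x in X[idx]],
    axis=0,
)
print("Top 5 globally important features:", np.argsort(global_attr)[::-1][:5])
```

The ridge coefficients serve as the per-feature attributions; LIME-style implementations often use a sparse learner such as LASSO or LARS instead, so that each explanation involves only a handful of features.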




Acknowledgment

The work of Radwa Elshawi is funded by the European Regional Development Funds via the Mobilitas Plus programme (grant MOBJD341). The work of Sherif Sakr is funded by the European Regional Development Funds via the Mobilitas Plus programme (grant MOBTT75).

Author information

Corresponding author

Correspondence to Sherif Sakr.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

ElShawi, R., Sherif, Y., Al-Mallah, M., Sakr, S. (2019). ILIME: Local and Global Interpretable Model-Agnostic Explainer of Black-Box Decision. In: Welzer, T., Eder, J., Podgorelec, V., Kamišalić Latifić, A. (eds) Advances in Databases and Information Systems. ADBIS 2019. Lecture Notes in Computer Science, vol. 11695. Springer, Cham. https://doi.org/10.1007/978-3-030-28730-6_4

  • DOI: https://doi.org/10.1007/978-3-030-28730-6_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-28729-0

  • Online ISBN: 978-3-030-28730-6

  • eBook Packages: Computer Science (R0)
