Guided-LORE: Improving LORE with a Focused Search of Neighbours

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12641)

Abstract

In recent years, many Machine Learning-based systems have achieved extraordinary performance in a wide range of fields, especially Natural Language Processing and Computer Vision. The construction of appropriate and meaningful explanations for the decisions of such systems, which are in many cases considered black boxes, is a hot research topic. Most of the state-of-the-art explanation systems in the field rely on generating synthetic instances, testing them on the black box, analysing the results and building an interpretable model, which is then used to provide an understandable explanation. In this paper we focus on this generation process and propose Guided-LORE, a variant of the well-known LORE method. We conceptualise the problem of neighbourhood generation as a search and solve it with the Uniform Cost Search algorithm. The experimental results on four datasets reveal the effectiveness of our method when compared with the most relevant related works.
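
The abstract frames neighbour generation as a cheapest-first exploration around the instance to be explained. As a rough, self-contained illustration of that idea (an assumption-laden sketch, not the paper's actual design), the Python snippet below runs a Uniform Cost Search from a seed instance: candidate perturbations are expanded in order of their cost from the seed, and the expanded instances form the synthetic neighbourhood that would then be labelled by the black box. The perturbation operator, the L1 cost and the neighbourhood budget are all illustrative assumptions.

```python
# Illustrative Uniform Cost Search over perturbed instances. The operators
# and parameters here are assumptions made for this sketch, not the concrete
# design of Guided-LORE.
import heapq
import itertools

def ucs_neighbourhood(seed, perturb, cost, budget=100):
    """Expand instances cheapest-first from `seed` (a tuple of feature values).

    perturb(x) yields candidate successors of instance x;
    cost(seed, x) is a non-negative priority (lower = closer to the seed).
    Returns up to `budget` instances, ordered from cheapest to costliest.
    """
    tie = itertools.count()              # tie-breaker so equal-cost entries
                                         # never make heapq compare instances
    frontier = [(0.0, next(tie), seed)]  # priority queue: (cost, tie, instance)
    seen = {seed}
    neighbourhood = []
    while frontier and len(neighbourhood) < budget:
        _, _, x = heapq.heappop(frontier)        # cheapest unexpanded instance
        neighbourhood.append(x)
        for nx in perturb(x):
            if nx not in seen:
                seen.add(nx)
                heapq.heappush(frontier, (cost(seed, nx), next(tie), nx))
    return neighbourhood

# Toy usage: numeric features perturbed by a fixed step, L1 distance as cost.
def step_perturb(x, step=0.5):
    for i in range(len(x)):
        for delta in (-step, step):
            yield x[:i] + (round(x[i] + delta, 6),) + x[i + 1:]

l1 = lambda a, b: sum(abs(u - v) for u, v in zip(a, b))
synthetic = ucs_neighbourhood((1.0, 2.0), step_perturb, l1, budget=50)
# `synthetic` would next be labelled by the black-box model and used to fit
# an interpretable surrogate (e.g. a shallow decision tree), as LORE does.
```

Prioritising by distance from the seed, rather than by accumulated path cost as in textbook Uniform Cost Search, makes the frontier grow outwards in rings around the instance to be explained; for the axis-aligned steps used here the two notions coincide along the cheapest route to each instance.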

Acknowledgements

The authors gratefully acknowledge the support of the FIS project PI18/00169 (Fondos FEDER) and the URV grant 2018-PFR-B2-61. The first author is supported by a URV Martí Franquès predoctoral grant.

Author information

Corresponding author

Correspondence to Najlaa Maaroof.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Maaroof, N., Moreno, A., Valls, A., Jabreel, M. (2021). Guided-LORE: Improving LORE with a Focused Search of Neighbours. In: Heintz, F., Milano, M., O'Sullivan, B. (eds) Trustworthy AI - Integrating Learning, Optimization and Reasoning. TAILOR 2020. Lecture Notes in Computer Science (LNAI), vol. 12641. Springer, Cham. https://doi.org/10.1007/978-3-030-73959-1_4

  • DOI: https://doi.org/10.1007/978-3-030-73959-1_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-73958-4

  • Online ISBN: 978-3-030-73959-1

  • eBook Packages: Computer Science, Computer Science (R0)
