
Towards Evaluation of Explainable Artificial Intelligence in Streaming Data

  • Conference paper
  • Explainable Artificial Intelligence (xAI 2024)

Abstract

This study introduces a method for assessing the quality of Explainable Artificial Intelligence (XAI) algorithms on dynamic data streams, focusing on the fidelity and stability of feature-importance and rule-based explanations. We employ established XAI metrics, such as fidelity and Lipschitz stability, to compare explainers with one another, and we introduce the Comparative Expert Stability Index (CESI) for benchmarking explainers against domain knowledge. We adapt these metrics to the streaming setting and test them in an unsupervised classification scenario in which simulated distribution shifts act as distinct classes. Our results underscore the need for adaptable explainers in complex scenarios such as failure detection, stressing the importance of continued research into versatile explanation techniques that improve the robustness and interpretability of XAI systems.
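The precise metric definitions are given in the full paper; as an illustration only, the following Python sketch estimates the kind of local Lipschitz stability score named above for a generic feature-importance explainer. The explainer callable, neighbourhood radius, and sampling scheme are our assumptions, not the authors' implementation.

```python
import numpy as np

def lipschitz_stability(explainer, x, radius=0.1, n_samples=50, seed=0):
    """Monte-Carlo estimate of local explanation stability at instance x.

    explainer: hypothetical callable mapping a 1-D instance to a
               feature-importance vector (e.g. a SHAP or LIME wrapper).
    Returns the worst observed ratio ||e(x) - e(x')|| / ||x - x'||
    over sampled neighbours x'; smaller values mean a more stable explainer.
    """
    rng = np.random.default_rng(seed)
    e_x = np.asarray(explainer(x))
    worst = 0.0
    for _ in range(n_samples):
        # draw a neighbour uniformly from an L-infinity ball around x
        x_p = x + rng.uniform(-radius, radius, size=x.shape)
        num = np.linalg.norm(e_x - np.asarray(explainer(x_p)))
        den = np.linalg.norm(x - x_p) + 1e-12  # guard against x' == x
        worst = max(worst, num / den)
    return worst
```

In a streaming setting such a score would be recomputed per window or per drift segment; the windowing policy is likewise an assumption here, not a detail taken from the paper.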


Notes

  1. https://github.com/shap/shap.
  2. https://github.com/marcotcr/lime.
  3. https://github.com/marcotcr/anchor.
  4. https://github.com/sbobek/lux.
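The four explainers above expose different APIs. To compare them under a single stability or fidelity metric, each can be wrapped as a plain function from an instance to a feature-importance vector. Below is a minimal, hypothetical adapter for the SHAP and LIME packages linked above; the model, prediction function, and background data are assumptions, and Anchor and LUX produce rule-based explanations that would need a different adapter.

```python
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer

def make_shap_explainer(predict_fn, background):
    # shap.Explainer wraps any f: (n, d) -> (n,) callable plus a background
    # masker; e.g. predict_fn = lambda X: model.predict_proba(X)[:, 1]
    explainer = shap.Explainer(predict_fn, background)
    return lambda x: explainer(np.asarray(x).reshape(1, -1)).values[0]

def make_lime_explainer(model, training_data):
    lime_exp = LimeTabularExplainer(training_data, mode="classification")
    def explain(x):
        exp = lime_exp.explain_instance(x, model.predict_proba,
                                        num_features=x.size)
        # as_map() yields {label: [(feature_index, weight), ...]};
        # densify it into one weight vector for the default label
        weights = np.zeros(x.size)
        for idx, w in exp.as_map()[exp.available_labels()[0]]:
            weights[idx] = w
        return weights
    return explain
```

Either wrapper can then be passed directly as the `explainer` argument of the stability sketch shown after the abstract.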


Acknowledgements

J. Gama and R. Ribeiro acknowledge the AI-BOOST project, funded by the European Union under GA No. 101135737. The paper is funded by the XPM project, financed by the National Science Centre, Poland, under the CHIST-ERA programme (grant agreement No. 857925; NCN UMO-2020/02/Y/ST6/00070). The research has also been supported by a grant from the Priority Research Area (DigiWorld) under the Strategic Programme Excellence Initiative at Jagiellonian University. We acknowledge the use of OpenAI’s ChatGPT-4 for reviewing and improving the language and style of this manuscript.

Author information


Corresponding author

Correspondence to Maciej Mozolewski.


Ethics declarations

Disclosure of Interests

Apart from the funding mentioned in the Acknowledgements, the authors declare no competing interests.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Mozolewski, M., Bobek, S., Ribeiro, R.P., Nalepa, G.J., Gama, J. (2024). Towards Evaluation of Explainable Artificial Intelligence in Streaming Data. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2156. Springer, Cham. https://doi.org/10.1007/978-3-031-63803-9_8


  • DOI: https://doi.org/10.1007/978-3-031-63803-9_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-63802-2

  • Online ISBN: 978-3-031-63803-9

  • eBook Packages: Computer Science, Computer Science (R0)
