Paper: Can We Trust Explanation! Evaluation of Model-Agnostic Explanation Techniques on Highly Imbalanced, Multiclass-Multioutput Classification Problem

Authors: Syed Ihtesham Hussain Shah 1 ; Annette Ten Teije 1 and José Volders 2

Affiliations: 1 Faculty of Science, Department of Computer Science, Vrije Universiteit Amsterdam, Netherlands ; 2 Diakonessenhuis, Netherlands

Keyword(s): Explainable AI, LIME, SHAP, Breast Cancer, Healthcare.

Abstract: Explainable AI (XAI) assists clinicians and researchers in understanding the rationale behind the predictions made by data-driven models, which helps them make informed decisions and trust the models' outputs. Providing accurate explanations for breast cancer treatment predictions in the context of a highly imbalanced, multiclass-multioutput classification problem is extremely challenging. The aim of this study is to perform a comprehensive and detailed analysis of the explanations generated by two post-hoc explanatory methods, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), for breast cancer treatment prediction using a highly imbalanced oncological dataset. We introduce evaluation metrics including consistency, fidelity, alignment with established clinical guidelines, and qualitative analysis to assess the effectiveness and faithfulness of these methods. By examining the strengths and limitations of LIME and SHAP, we aim to determine their suitability for supporting clinical decision making in multifaceted treatments and complex scenarios. Our findings provide important insights into the use of these explanation methods, highlighting the importance of transparent and robust predictive models. Our experiments show that SHAP outperforms LIME in terms of fidelity and provides more stable explanations that are better aligned with medical guidelines. This work provides guidance to practitioners and model developers in selecting the most suitable explanation technique to promote trust and enhance understanding in predictive healthcare models.
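As a minimal sketch of the fidelity idea the abstract mentions, the snippet below fits a LIME-style local linear surrogate around one instance of a black-box classifier and scores how closely the surrogate tracks the model's predicted probabilities on perturbed samples. The dataset, model, perturbation scale, and R²-based fidelity score are illustrative assumptions, not the paper's actual oncology data or evaluation protocol.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

# Black-box model on a public (non-oncology-registry) breast cancer dataset.
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
x0 = X[0]
# Perturb the instance with Gaussian noise scaled to each feature's std dev.
perturbed = x0 + rng.normal(scale=0.1 * X.std(axis=0), size=(500, X.shape[1]))
p = model.predict_proba(perturbed)[:, 1]  # black-box outputs to be mimicked

# Local linear surrogate (LIME-style explanation around x0).
surrogate = Ridge(alpha=1.0).fit(perturbed, p)
# Fidelity: how well the surrogate reproduces the black-box locally.
fidelity = r2_score(p, surrogate.predict(perturbed))
print(f"local fidelity (R^2): {fidelity:.3f}")
```

A fidelity near 1 means the linear explanation is locally faithful to the model; comparing such scores across explanation methods is one way to operationalize the paper's fidelity criterion.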

CC BY-NC-ND 4.0


Paper citation in several formats:
Shah, S. I. H., Teije, A. T. and Volders, J. (2025). Can We Trust Explanation! Evaluation of Model-Agnostic Explanation Techniques on Highly Imbalanced, Multiclass-Multioutput Classification Problem. In Proceedings of the 18th International Joint Conference on Biomedical Engineering Systems and Technologies - HEALTHINF; ISBN 978-989-758-731-3; ISSN 2184-4305, SciTePress, pages 530-539. DOI: 10.5220/0013157400003911

@conference{healthinf25,
author={Syed Ihtesham Hussain Shah and Annette Ten Teije and José Volders},
title={Can We Trust Explanation! Evaluation of Model-Agnostic Explanation Techniques on Highly Imbalanced, Multiclass-Multioutput Classification Problem},
booktitle={Proceedings of the 18th International Joint Conference on Biomedical Engineering Systems and Technologies - HEALTHINF},
year={2025},
pages={530-539},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013157400003911},
isbn={978-989-758-731-3},
issn={2184-4305},
}

TY - CONF

JO - Proceedings of the 18th International Joint Conference on Biomedical Engineering Systems and Technologies - HEALTHINF
TI - Can We Trust Explanation! Evaluation of Model-Agnostic Explanation Techniques on Highly Imbalanced, Multiclass-Multioutput Classification Problem
SN - 978-989-758-731-3
IS - 2184-4305
AU - Shah, S.
AU - Teije, A.
AU - Volders, J.
PY - 2025
SP - 530
EP - 539
DO - 10.5220/0013157400003911
PB - SciTePress