Abstract
The paper focuses on a medical diagnostic procedure supported by decision models generated by suitable tree-based machine learning algorithms such as C4.5. The typical result is a set of trees that must be evaluated by a medical expert. This step is often lengthy: the models may be too detailed and extensive, the expert is not always fully available, and several experts may differ in their opinions. Based on our experience with tasks of this type, such as the diagnostics of Metabolic Syndrome, Mild Cognitive Impairment, or cardiovascular diseases, we have designed and implemented a prototype of a Clinical Decision Support System that enhances the tree-based model with selected interpretability methods such as LIME, SHAP, and SunBurst interactive visualization. Next, we designed a mechanism that combines selected methods from Multiple-Criteria Decision Making (MCDM) with evaluation metrics such as functional correctness, usability, and stability. We primarily focused on metrics used to evaluate the quality of software products, such as functional suitability, performance efficiency, and usability. The presented proof of concept is being further developed into a functional prototype that will be experimentally verified in a pilot study.
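The two sketches below make the pipeline from the abstract concrete. Both are minimal illustrations under stated assumptions, not the authors' implementation. The first pairs a tree-based diagnostic model with the LIME and SHAP explanations mentioned above; the dataset file metabolic_syndrome.csv, its diagnosis target column, and all hyperparameters are hypothetical placeholders, and scikit-learn's CART with entropy splits stands in for C4.5, which scikit-learn does not ship.

import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical diagnostic dataset: tabular features plus a binary diagnosis.
df = pd.read_csv("metabolic_syndrome.csv")            # placeholder file name
X, y = df.drop(columns=["diagnosis"]), df["diagnosis"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# CART with entropy splits as a stand-in for C4.5.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
tree.fit(X_train, y_train)

# SHAP: exact per-feature attributions for every test instance.
shap_values = shap.TreeExplainer(tree).shap_values(X_test)

# LIME: a local surrogate explanation for a single patient.
lime_exp = LimeTabularExplainer(
    X_train.values, feature_names=list(X.columns),
    class_names=["negative", "positive"], discretize_continuous=True)
print(lime_exp.explain_instance(
    X_test.iloc[0].values, tree.predict_proba, num_features=5).as_list())

The second sketch shows TOPSIS (Hwang and Yoon), one candidate for the MCDM step that ranks models against evaluation metrics such as functional correctness, usability, and stability; the candidate models, their scores, and the criterion weights are invented for illustration.

import numpy as np

# Rows: candidate tree-based models; columns: functional correctness,
# usability (e.g. a SUS score), stability, all treated as benefit criteria.
scores = np.array([
    [0.91, 72.5, 0.80],   # model A
    [0.88, 81.0, 0.86],   # model B
    [0.93, 65.0, 0.74],   # model C
])
weights = np.array([0.5, 0.3, 0.2])           # assumed expert-chosen weights

v = scores / np.linalg.norm(scores, axis=0) * weights  # normalise and weight
ideal, anti = v.max(axis=0), v.min(axis=0)             # ideal / anti-ideal
d_plus = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)               # higher is better
print(np.argsort(-closeness))                          # model ranking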
References
Lombrozo, T.: The structure and function of explanations. Trends Cogn. Sci. 10(10), 464–470 (2006). https://doi.org/10.1016/j.tics.2006.08.004
Ribeiro, M.T., Singh, S., Guestrin, C.: ‘Why should I trust you?’ Explaining the predictions of any classifier. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 13–17 August 2016, pp. 1135–1144 (2016). https://doi.org/10.1145/2939672.2939778
Doshi-Velez, F., Kim, B.: Towards a rigorous science of interpretable machine learning. arXiv preprint (2017). https://arxiv.org/pdf/1702.08608.pdf
Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019). https://doi.org/10.1016/j.artint.2018.07.007
Kim, B., Khanna, R., Koyejo, O.O.: Examples are not enough, learn to criticize! criticism for interpretability. In: Advances in Neural Information Processing Systems, vol. 29 (2016)
Stiglic, G., Kocbek, P., Fijacko, N., Zitnik, M., Verbert, K., Cilar, L.: Interpretability of machine learning-based prediction models in healthcare. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 10(5), 1–13 (2020). https://doi.org/10.1002/widm.1379
Gilpin, L.H., Bau, D., Yuan, B.Z., Bajwa, A., Specter, M., Kagal, L.: Explaining explanations: an overview of interpretability of machine learning. In: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pp. 80–89 (2018)
Carvalho, D.V., Pereira, E.M., Cardoso, J.S.: Machine learning interpretability: a survey on methods and metrics. Electronics 8(8), 1–34 (2019). https://doi.org/10.3390/electronics8080832
McKelvey, T., Ahmad, M., Teredesai, A., Eckert, C.: Interpretable machine learning in healthcare. In: Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, p. 447 (2018)
Lipton, Z.C.: The mythos of model interpretability. Commun. ACM 61(10), 35–43 (2018). https://doi.org/10.1145/3233231
Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1(5), 206–215 (2019). https://doi.org/10.1038/s42256-019-0048-x
Lou, Y., Caruana, R., Gehrke, J.: Intelligible models for classification and regression. In: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 150–158 (2012)
Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., Elhadad, N.: Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1721–1730 (2015)
Murdoch, W.J., Singh, C., Kumbier, K., Abbasi-Asl, R., Yu, B.: Definitions, methods, and applications in interpretable machine learning. Proc. Natl. Acad. Sci. U. S. A. 116(44), 22071–22080 (2019). https://doi.org/10.1073/pnas.1900654116
Došilović, F.K., Brčić, M., Hlupić, N.: Explainable artificial intelligence: a survey. In: 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 210–215 (2018). https://doi.org/10.23919/MIPRO.2018.8400040
Dyatlov, I.T.: Manifestation of nonuniversality of lepton interactions in spontaneously violated mirror symmetry. Phys. At. Nucl. 81(2), 236–243 (2018). https://doi.org/10.1134/S1063778818020060
Vellido, A.: The importance of interpretability and visualization in machine learning for applications in medicine and health care. Neural Comput. Appl. 32(24), 18069–18083 (2019). https://doi.org/10.1007/s00521-019-04051-w
Biran, O., Cotton, C.: Explanation and justification in machine learning: a survey. In: IJCAI-17 Workshop on Explainable AI, pp. 8–13 (2017). http://www.cs.columbia.edu/~orb/papers/xai_survey_paper_2017.pdf
Elshawi, R., Al-Mallah, M.H., Sakr, S.: On the interpretability of machine learning-based model for predicting hypertension. BMC Med. Inform. Decis. Mak. 19(1), 146 (2019). https://doi.org/10.1186/s12911-019-0874-0
Keil, F.C.: Explanation and understanding. Annu. Rev. Psychol. 57, 227–254 (2006). https://doi.org/10.1146/annurev.psych.57.102904.190100
Kaur, H., Nori, H., Jenkins, S., Caruana, R., Wallach, H., Wortman Vaughan, J.: Interpreting interpretability: understanding data scientists’ use of interpretability tools for machine learning. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1–14 (2020)
Mohseni, S., Ragan, E.: Combating Fake News with Interpretable News Feed Algorithms (2018). http://arxiv.org/abs/1811.12349
Mohseni, S., Ragan, E., Hu, X.: Open Issues in Combating Fake News: Interpretability as an Opportunity (2019). http://arxiv.org/abs/1904.03016
Malolan, B., Parekh, A., Kazi, F.: Explainable deep-fake detection using visual interpretability methods. In: 2020 3rd International Conference on Information and Computer Technologies (ICICT), pp. 289–293 (2020). https://doi.org/10.1109/ICICT50521.2020.00051
Trinh, L., Tsang, M., Rambhatla, S., Liu, Y.: Interpretable and trustworthy deepfake detection via dynamic prototypes. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 1973–1983 (2021)
Chen, C., Lin, K., Rudin, C., Shaposhnik, Y., Wang, S., Wang, T.: An Interpretable Model with Globally Consistent Explanations for Credit Risk, pp. 1–10 (2018). http://arxiv.org/abs/1811.12615
Hajek, P.: Interpretable fuzzy rule-based systems for detecting financial statement fraud. In: MacIntyre, J., Maglogiannis, I., Iliadis, L., Pimenidis, E. (eds.) AIAI 2019. IAICT, vol. 559, pp. 425–436. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-19823-7_36
Tan, S., Caruana, R., Hooker, G., Lou, Y.: Distill-and-compare: auditing black-box models using transparent model distillation. In: AIES 2018 - Proceedings of 2018 AAAI/ACM Conference AI, Ethics, Society, pp. 303–310 (2018). https://doi.org/10.1145/3278721.3278725
Soundarajan, S., Clausen, D.L.: Equal Protection Under the Algorithm: A Legal-Inspired Framework for Identifying Discrimination in Machine Learning (2018)
Das, D., Ito, J., Kadowaki, T., Tsuda, K.: An interpretable machine learning model for diagnosis of Alzheimer’s disease. PeerJ 7, e6543 (2019)
Miotto, R., Li, L., Kidd, B.A., Dudley, J.T.: Deep patient: an unsupervised representation to predict the future of patients from the electronic health records. Sci. Rep. 6(1), 26094 (2016). https://doi.org/10.1038/srep26094
Mamoshina, P., Vieira, A., Putin, E., Zhavoronkov, A.: Applications of deep learning in biomedicine. Mol. Pharm. 13(5), 1445–1454 (2016). https://doi.org/10.1021/acs.molpharmaceut.5b00982
Jackups, R., Jr.: Deep learning makes its way to the clinical laboratory. Clin. Chem. 63(12), 1790–1791 (2017). https://doi.org/10.1373/clinchem.2017.280768
Nori, H., Jenkins, S., Koch, P., Caruana, R.: InterpretML: A Unified Framework for Machine Learning Interpretability, pp. 1–8 (2019). http://arxiv.org/abs/1909.09223
Nemati, S., Holder, A., Razmi, F., Stanley, M.D., Clifford, G.D., Buchman, T.G.: An interpretable machine learning model for accurate prediction of sepsis in the ICU. Crit. Care Med. 46(4), 547–553 (2018). https://doi.org/10.1097/CCM.0000000000002936
Wu, H., et al.: Interpretable machine learning for COVID-19: an empirical study on severity prediction task. IEEE Trans. Artif. Intell. (2021)
Arik, S., Iantovics, L.B.: Next generation hybrid intelligent medical diagnosis systems. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds.) Neural Information Processing, pp. 903–912. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70090-8_92
Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems, vol. 30 (2017). https://arxiv.org/pdf/1705.07874.pdf
Stasko, J., Catrambone, R., Guzdial, M., McDonald, K.: An evaluation of space-filling information visualizations for depicting hierarchical structures. Int. J. Hum. Comput. Stud. 53(5), 663–694 (2000). https://doi.org/10.1006/ijhc.2000.0420
Du, M., Liu, N., Hu, X.: Techniques for interpretable machine learning. Commun. ACM 63(1), 68–77 (2019)
Molnar, C.: Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (2019). https://christophm.github.io/interpretable-ml-book
Ribeiro, M.T., Singh, S., Guestrin, C.: Anchors: high-precision model-agnostic explanations. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1 (2018)
Sharma, R., Reddy, N., Kamakshi, V., Krishnan, N.C., Jain, S.: MAIRE - a model-agnostic interpretable rule extraction procedure for explaining classifiers. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E. (eds.) CD-MAKE 2021. LNCS, vol. 12844, pp. 329–349. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-84060-0_21
Lakkaraju, H., Kamar, E., Caruana, R., Leskovec, J.: Faithful and customizable explanations of black box models. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 131–138 (2019). https://doi.org/10.1145/3306618.3314229
Chen, J., Song, L., Wainwright, M.J., Jordan, M.I.: Learning to explain: an information-theoretic perspective on model interpretation. In: 35th International Conference on Machine Learning, ICML 2018, vol. 2, pp. 1386–1418 (2018). https://arxiv.org/pdf/1802.07814.pdf
Kumarakulasinghe, N.B., Blomberg, T., Liu, J., Leao, A.S., Papapetrou, P.: Evaluating local interpretable model-agnostic explanations on clinical machine learning classification models. In: 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems, pp. 7–12 (2020)
Meske, C., Bunde, E.: Transparency and trust in human-AI-interaction: the role of model-agnostic explanations in computer vision-based decision support. In: Degen, H., Reinerman-Jones, L. (eds.) HCII 2020. LNCS, vol. 12217, pp. 54–69. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-50334-5_4
Da Cruz, H.F., Schneider, F., Schapranow, M.-P.: Prediction of Acute Kidney Injury in Cardiac Surgery Patients: Interpretation using Local Interpretable Model-agnostic Explanations (2019)
Roth, A.E. (ed.): The Shapley Value: Essays in Honor of Lloyd S. Shapley. Cambridge University Press, Cambridge (1988)
Altarawneh, R., Humayoun, S.R.: Visualizing software structures through enhanced interactive sunburst layout. In: Proceedings of the International Working Conference on Advanced Visual Interfaces (2016)
Pourhomayoun, M., Shakibi, M.: Predicting mortality risk in patients with COVID-19 using machine learning to help medical decision-making. Smart Health 20, 100178 (2021). https://doi.org/10.1016/j.smhl.2020.100178
Xu, W., Zhang, J., Zhang, Q., Wei, X.: Risk prediction of type II diabetes based on random forest model. In: 2017 Third International Conference on Advances in Electrical, Electronics, Information, Communication and Bio-Informatics (AEEICB), pp. 382–386 (2017). https://doi.org/10.1109/AEEICB.2017.7972337
Kumar, S., Sahoo, G.: A random forest classifier based on genetic algorithm for cardiovascular diseases diagnosis (research note). Int. J. Eng. 30(11), 1723–1729 (2017)
Khalilia, M., Chakraborty, S., Popescu, M.: Predicting disease risks from highly imbalanced data using random forest. BMC Med. Inform. Decis. Mak. 11(1), 51 (2011). https://doi.org/10.1186/1472-6947-11-51
Yasodhara, A., Asgarian, A., Huang, D., Sobhani, P.: On the trustworthiness of tree ensemble explainability methods. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E. (eds.) CD-MAKE 2021. LNCS, vol. 12844, pp. 293–308. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-84060-0_19
Hancox-Li, L.: Robustness in Machine Learning Explanations: Does It Matter? (2020)
Brooke, J.: SUS-A quick and dirty usability scale. Usability Eval. Ind. 189(194), 4–7 (1996)
Holzinger, A., Langs, G., Denk, H., Zatloukal, K., Müller, H.: Causability and explainability of artificial intelligence in medicine. WIREs Data Min. Knowl. Discov. 9(4), e1312 (2019). https://doi.org/10.1002/widm.1312
Holzinger, A., Carrington, A., Müller, H.: Measuring the quality of explanations: the system causability scale (SCS). KI - Künstliche Intelligenz 34(2), 193–198 (2020). https://doi.org/10.1007/s13218-020-00636-z
Fiala, P., Jablonský, J., Maňas, M.: Vícekriteriální rozhodování [Multi-criteria Decision Making]. Vysoká škola ekonomická v Praze (1994)
Saaty, T.L.: The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation. McGraw-Hill International Book Company (1980)
Hwang, C.L., Yoon, K.: Multiple Attribute Decision Making: Methods and Applications A State-of-the-Art Survey. Springer, Heidelberg (1981). https://doi.org/10.1007/978-3-642-48318-9
Acknowledgements
The work was supported by The Slovak Research and Development Agency under grant no. APVV-20-0232 and The Scientific Grant Agency of the Ministry of Education, Science, Research and Sport of the Slovak Republic under grant no. VEGA 1/0685/2.
Copyright information
© 2022 IFIP International Federation for Information Processing
Cite this paper
Anderková, V., Babič, F. (2022). How to Reduce the Time Necessary for Evaluation of Tree-Based Models. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E. (eds) Machine Learning and Knowledge Extraction. CD-MAKE 2022. Lecture Notes in Computer Science, vol 13480. Springer, Cham. https://doi.org/10.1007/978-3-031-14463-9_19
DOI: https://doi.org/10.1007/978-3-031-14463-9_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-14462-2
Online ISBN: 978-3-031-14463-9
eBook Packages: Computer Science (R0)