Abstract
The COVID-19 pandemic has had a significant impact on global health and has become a major international concern. Fortunately, early detection has helped reduce the number of deaths. Artificial Intelligence (AI) and Machine Learning (ML) techniques have entered a new era in which the main objective is no longer merely to assist experts in decision-making but to improve and extend their capabilities, and this is where interpretability comes in. This study addresses one of the biggest hurdles AI faces today: the lack of public trust and acceptance caused by its black-box nature. In this paper, we apply a deep Convolutional Neural Network (CNN) to chest computed tomography (CT) images, and a Support Vector Machine (SVM) and a Random Forest (RF) to clinical-symptom data (Bio-data), to diagnose COVID-19-positive patients. Our objective is to present Explainable AI (XAI) models that use the Local Interpretable Model-agnostic Explanations (LIME) technique to identify patients positive for the virus in an interpretable way. The results are promising and outperform the state of the art: the CNN reached an accuracy and F1-score of 96% on CT-scan images, and the SVM outperformed the RF with an accuracy of 90% and a specificity of 91% on Bio-data. The interpretable outputs of the XAI-Img-Model and the XAI-Bio-Model show that LIME explanations help reveal how the SVM and CNN black-box models reach their decisions after being trained on different types of COVID-19 data. This can significantly increase trust and help experts understand and learn new patterns of the current pandemic.
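To make the LIME step concrete, below is a minimal sketch of how an XAI-Bio-Model pipeline of this kind could look in Python with scikit-learn and the lime package. It is an illustration under assumed inputs, not the authors' exact implementation: the feature names, synthetic data, and hyperparameters are hypothetical placeholders, and the image pipeline would use lime.lime_image.LimeImageExplainer analogously.

```python
# Sketch of a LIME-explained SVM on tabular clinical-symptom data.
# All feature names and data here are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical Bio-data: rows are patients, columns are clinical symptoms.
feature_names = ["fever", "cough", "fatigue", "shortness_of_breath", "age"]
rng = np.random.default_rng(0)
X = rng.random((500, len(feature_names)))   # placeholder feature matrix
y = rng.integers(0, 2, size=500)            # 0 = negative, 1 = positive

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# probability=True is required so LIME can query class probabilities.
svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X_train, y_train)

# LIME perturbs one instance and fits a local linear surrogate model,
# producing per-feature weights for that single prediction.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["negative", "positive"],
    mode="classification",
)
exp = explainer.explain_instance(X_test[0], svm.predict_proba, num_features=5)
print(exp.as_list())  # e.g. [("fever > 0.74", 0.12), ...] local contributions
```

The key design point LIME relies on is model-agnosticism: it only needs a predict_proba-style function, which is why the same explanation procedure applies to both the SVM on Bio-data and the CNN on CT images.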