Abstract
Medical image segmentation is an important research area. Physicians and radiologists diagnose diseases by observing visual features in images acquired with modalities such as CT, MRI, X-ray, and ultrasound. AI-based medical image segmentation models can help radiologists and other experts analyze various ailments; however, the predictions are trustworthy only when a doctor can interpret them. Our work builds on Explainable AI (XAI). We propose GradXcepUNet, an XAI-based medical image segmentation model that couples the segmentation power of U-Net with the explainability of the Xception classification network via Grad-CAM. Grad-CAM visualizations highlight the regions most important to the Xception classifier; these visualized critical regions are then used as guidance for an existing segmentation model (U-Net) to produce the final segmentation. With the assistance of XAI analysis and visualization, GradXcepUNet outperforms the original U-Net and many state-of-the-art methods, reaching a Dice coefficient of 97.73% and an Intersection over Union (IoU) score of 78.86% on the 3D-IRCADb-01 database.
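As a reference for the reported metrics, the Dice coefficient and IoU can be computed on binary masks as below. This is an illustrative sketch of the standard definitions, not the paper's evaluation code; the flat 0/1 lists stand in for flattened segmentation masks.

```python
def dice_coefficient(pred, target):
    """Dice = 2*|A∩B| / (|A| + |B|) for binary masks given as flat 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0  # both masks empty -> perfect match

def iou_score(pred, target):
    """IoU = |A∩B| / |A∪B| for binary masks given as flat 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0

# Toy example: 2 overlapping pixels, 3 predicted, 3 in ground truth.
pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 1, 1, 0]
# Dice = 2*2 / (3+3) = 0.6667; IoU = 2 / 4 = 0.5
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is consistent with the Dice score being the higher of the two numbers reported above.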
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Kaur, A., Dong, G., Basu, A. (2022). GradXcepUNet: Explainable AI Based Medical Image Segmentation. In: Berretti, S., Su, GM. (eds) Smart Multimedia. ICSM 2022. Lecture Notes in Computer Science, vol 13497. Springer, Cham. https://doi.org/10.1007/978-3-031-22061-6_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-22060-9
Online ISBN: 978-3-031-22061-6
eBook Packages: Computer Science (R0)