
GradXcepUNet: Explainable AI Based Medical Image Segmentation

  • Conference paper

Smart Multimedia (ICSM 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13497)


Abstract

Medical image segmentation is an important research area. Physicians and radiologists diagnose diseases by observing visual features obtained from imaging methods such as CT, MRI, X-ray, and ultrasound. AI-based medical image segmentation models can help radiologists and experts analyze various ailments; however, the predictions are only trustworthy when a doctor can interpret the results. Our work utilizes Explainable AI (XAI). We propose GradXcepUNet, an XAI-based medical image segmentation model that couples the segmentation power of U-Net with the explainability of the Xception classification network via Grad-CAM. Grad-CAM visualizations highlight the regions most critical to the Xception network's classification decision. These visualized critical regions are then used as guidance and combined with an existing segmentation model (U-Net) to produce the final segmentation results. With the assistance of XAI analysis and visualization, GradXcepUNet outperforms the original U-Net and many state-of-the-art methods. The evaluation results show a Dice coefficient of 97.73% and an Intersection over Union (IoU) score of 78.86% on the 3D-IRCADb-01 database.
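The pipeline described above, a Grad-CAM heatmap from an Xception classifier used to guide a U-Net, can be sketched as follows. This is a minimal illustration in TensorFlow/Keras, assuming a two-class Xception fine-tuned on CT slices, the block14_sepconv2_act layer as the Grad-CAM target, and fusion of the heatmap as an extra input channel for the U-Net; these details are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal Grad-CAM guidance sketch (assumed details: layer name, 2-class head,
# 299x299 input, heatmap-as-extra-channel fusion for a 4-channel U-Net).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, target_layer="block14_sepconv2_act", class_idx=None):
    """Return a [0, 1] heatmap of the regions driving the classifier's decision."""
    # Sub-model mapping the input to (target-layer feature maps, class predictions).
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(target_layer).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])      # add batch dimension
        if class_idx is None:
            class_idx = tf.argmax(preds[0])                 # most probable class
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)            # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))         # global-average-pool gradients
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)                 # normalize to [0, 1]
    return tf.image.resize(cam[..., None], image.shape[:2]).numpy()[..., 0]

# Hypothetical usage: fuse the heatmap with the CT slice before segmentation.
xception = tf.keras.applications.Xception(weights=None, classes=2,
                                           input_shape=(299, 299, 3))
ct_slice = np.random.rand(299, 299, 3).astype("float32")   # stand-in CT slice
heatmap = grad_cam(xception, ct_slice)
guided_input = np.concatenate([ct_slice, heatmap[..., None]], axis=-1)
# guided_input (H, W, 4) would then be fed to a 4-channel U-Net to produce the
# final segmentation, with the Grad-CAM channel acting as the XAI guidance.
```

The key design point is that the classifier's class-discriminative evidence, rather than a learned attention map, supplies the spatial prior for the segmentation network.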



Author information


Corresponding author

Correspondence to Amandeep Kaur.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 217 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kaur, A., Dong, G., Basu, A. (2022). GradXcepUNet: Explainable AI Based Medical Image Segmentation. In: Berretti, S., Su, GM. (eds) Smart Multimedia. ICSM 2022. Lecture Notes in Computer Science, vol 13497. Springer, Cham. https://doi.org/10.1007/978-3-031-22061-6_13


  • DOI: https://doi.org/10.1007/978-3-031-22061-6_13


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-22060-9

  • Online ISBN: 978-3-031-22061-6

  • eBook Packages: Computer Science; Computer Science (R0)
