Cross-Slice Attention and Evidential Critical Loss for Uncertainty-Aware Prostate Cancer Detection

  • Conference paper
  • Part of the proceedings: Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 (MICCAI 2024)

Abstract

Current deep learning-based models typically analyze medical images in either 2D or 3D, thereby either disregarding volumetric information or suffering sub-optimal performance due to the anisotropic resolution of MR data. Furthermore, an accurate uncertainty estimate is beneficial to clinicians, as it indicates how confident a model is in its prediction. We propose a novel 2.5D cross-slice attention model that utilizes both global and local information, along with an evidential critical loss, to perform evidential deep learning for the detection, in MR images, of prostate cancer, one of the most common cancers and a leading cause of cancer-related death in men. We perform extensive experiments with our model on two different datasets and achieve state-of-the-art performance in prostate cancer detection along with improved epistemic uncertainty estimation. The implementation of the model is available at https://github.com/aL3x-O-o-Hung/GLCSA_ECLoss.
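To make the two ideas in the abstract concrete, the sketch below illustrates, under assumptions of my own, (1) a generic cross-slice attention layer that mixes features along the slice axis of an anisotropic volume and (2) an evidential output head whose Dirichlet strength yields a per-pixel epistemic uncertainty, following the general evidential deep learning formulation of Sensoy et al. The class and function names, the pooled-descriptor attention design, and the softplus evidence head are illustrative assumptions; this is not the paper's GLCSA module or its evidential critical loss, which are available in the linked repository.

```python
# Minimal PyTorch sketch (illustrative only; not the authors' GLCSA_ECLoss code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossSliceAttention(nn.Module):
    """Attend across the slice axis: each slice's pooled descriptor attends to
    every other slice, so in-plane features are re-weighted with volumetric context."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, S, C, H, W) -- batch, slices, channels, height, width
        b, s, c, h, w = x.shape
        desc = x.mean(dim=(3, 4))                 # (B, S, C) per-slice descriptors
        mixed, _ = self.attn(desc, desc, desc)    # slices attend to one another
        gates = self.norm(mixed).sigmoid()        # (B, S, C) slice-wise channel gates
        return x * gates.view(b, s, c, 1, 1)      # re-weight each slice's features


def evidential_uncertainty(logits: torch.Tensor):
    """Map per-class logits (B, K, H, W) to Dirichlet parameters, expected class
    probabilities, and epistemic uncertainty u = K / S, where S = sum_k alpha_k."""
    evidence = F.softplus(logits)                 # non-negative evidence e_k
    alpha = evidence + 1.0                        # Dirichlet parameters alpha_k = e_k + 1
    strength = alpha.sum(dim=1, keepdim=True)     # Dirichlet strength S
    prob = alpha / strength                       # expected probabilities p_k = alpha_k / S
    uncertainty = logits.shape[1] / strength      # u = K / S in (0, 1]
    return alpha, prob, uncertainty


# Usage with hypothetical shapes: 20 slices of 32-channel 64x64 feature maps,
# then a 2-class (background vs. lesion) evidential head.
feats = CrossSliceAttention(32)(torch.randn(1, 20, 32, 64, 64))
alpha, prob, u = evidential_uncertainty(torch.randn(1, 2, 64, 64))
```

In this kind of formulation, a confident prediction accumulates large evidence (large S), driving u toward zero, while out-of-distribution or ambiguous regions keep S small and u close to one; the paper's evidential critical loss shapes how that evidence is learned, which the sketch does not attempt to reproduce.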


Acknowledgments

The research reported herein was funded in part by the National Institutes of Health under grants R01-CA248506 and R01-CA272702 and by the Integrated Diagnostics Program of the Departments of Radiological Sciences and Pathology in the UCLA David Geffen School of Medicine.

Author information

Corresponding author

Correspondence to Alex Ling Yu Hung.


Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 154 KB)

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Hung, A.L.Y., Zheng, H., Zhao, K., Pang, K., Terzopoulos, D., Sung, K. (2024). Cross-Slice Attention and Evidential Critical Loss for Uncertainty-Aware Prostate Cancer Detection. In: Linguraru, M.G., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2024. MICCAI 2024. Lecture Notes in Computer Science, vol 15008. Springer, Cham. https://doi.org/10.1007/978-3-031-72111-3_11

  • DOI: https://doi.org/10.1007/978-3-031-72111-3_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72110-6

  • Online ISBN: 978-3-031-72111-3

  • eBook Packages: Computer Science, Computer Science (R0)
