
Context-Aware Synergetic Multiplex Network for Multi-organ Segmentation of Cervical Cancer MRI

  • Conference paper
  • In: Predictive Intelligence in Medicine (PRIME 2020)

Abstract

Generative Adversarial Networks (GANs) have increasingly broken records in solving challenging medical image analysis problems such as medical image de-noising, segmentation, detection, classification, and reconstruction. However, to the best of our knowledge, they have not been used for female pelvic multi-organ segmentation. Accurate segmentation of uterine cervical cancer (UCC) organs (i.e., bladder, vagina, and tumor) from magnetic resonance imaging (MRI) is crucial for effective UCC staging, yet it is a highly challenging task due to (1) noisy MR images, (2) large within-subject variability in the structure and intensity of UCC organs, and (3) across-subject variability. More importantly, very few works have addressed how to aggregate interactions across MRI views using GANs for multi-organ segmentation while providing context information. In this work, we propose a novel synergetic multiplex network (SMN), a multi-stage deep learning architecture based on cycle-GANs, to segment multiple pelvic organs from complementary multi-view MRI. It introduces three major contributions to the multi-organ segmentation literature: (1) it models the interactions across data views using a novel multiplex architecture composed of multiple layers, where each SMN layer nests a cascade of view-specific context-aware cycle-GANs and synergistically communicates context information to the other, parallel view-specific layers via multiplex coupling links; (2) it captures shared and complementary information between different views to segment UCC organs in each MRI view; and (3) it enforces spatial consistency between neighboring pixels within the same tissue for UCC segmentation. Specifically, in a gradual and deep manner, the proposed method improves the segmentation results by iteratively providing more refined context information from the other views to train the next segmentation cycle-GAN in each SMN layer. We evaluated our SMN framework on 15 T2w-MR sequences with axial and sagittal views, and show that SMN is robust for the UCC segmentation task, significantly (\(p<0.05\)) outperforming comparison segmentation methods.
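To make the multiplex data flow concrete, the sketch below mocks up the inference loop implied by the abstract: one cascade of segmenters per MRI view, with each stage receiving the view's image plus the context (segmentation) maps produced by the other views at the previous stage through the coupling links. It is a minimal NumPy illustration under stated assumptions, not the authors' implementation: `CycleGANSegmenter`, `smn_inference`, the thresholding placeholder, and the averaging of cross-view context are hypothetical stand-ins, and cross-view spatial alignment (registration/resampling between axial and sagittal planes) is omitted.

```python
import numpy as np


class CycleGANSegmenter:
    """Placeholder for one view-specific, context-aware cycle-GAN stage.

    In the paper each stage is a trained cycle-GAN; here it is a toy callable
    so that the multiplex control flow can be run end to end.
    """

    def __init__(self, view, stage):
        self.view = view
        self.stage = stage

    def segment(self, image, context=None):
        # A real stage would map (image, cross-view context) -> organ labels.
        # Toy rule: mix in the context map (if any) and threshold at the mean.
        x = image if context is None else 0.5 * (image + context)
        return (x > x.mean()).astype(np.float32)


def smn_inference(views, n_stages=3):
    """Run one cascade of segmenters per view; after every stage, pass each
    view's segmentation map to the other views' next stage, mimicking the
    multiplex coupling links."""
    names = list(views)
    cascades = {v: [CycleGANSegmenter(v, s) for s in range(n_stages)] for v in names}
    context = {v: None for v in names}  # no cross-view context at the first stage
    outputs = {}
    for s in range(n_stages):
        outputs = {v: cascades[v][s].segment(views[v], context[v]) for v in names}
        # Coupling links: each view receives the (averaged) maps of the others.
        # Assumes all views live on a common grid; the real pipeline would need
        # registration/resampling between axial and sagittal planes.
        context = {v: np.mean([outputs[u] for u in names if u != v], axis=0)
                   for v in names}
    return outputs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    views = {"axial": rng.random((128, 128)), "sagittal": rng.random((128, 128))}
    masks = smn_inference(views)
    print({v: m.shape for v, m in masks.items()})
```

With real cycle-GAN stages plugged in, the same loop would realize the "gradual and deep" refinement described above: each stage of a view sees progressively more refined context from the other view.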

Author information

Corresponding author: Islem Rekik.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Bnouni, N., Rekik, I., Rhim, M.S., Ben Amara, N.E. (2020). Context-Aware Synergetic Multiplex Network for Multi-organ Segmentation of Cervical Cancer MRI. In: Rekik, I., Adeli, E., Park, S.H., Valdés Hernández, M.d.C. (eds) Predictive Intelligence in Medicine. PRIME 2020. Lecture Notes in Computer Science, vol 12329. Springer, Cham. https://doi.org/10.1007/978-3-030-59354-4_1

  • DOI: https://doi.org/10.1007/978-3-030-59354-4_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59353-7

  • Online ISBN: 978-3-030-59354-4

  • eBook Packages: Computer Science (R0)
