Style Enhanced Domain Adaptation Neural Network for Cross-Modality Cervical Tumor Segmentation

  • Conference paper
  • First Online:
Computational Mathematics Modeling in Cancer Analysis (CMMCA 2023)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 14243))

Abstract

Cervical tumor segmentation is an essential step in cervical cancer diagnosis and treatment. Because multi-modality data contain richer information and are widely available in clinical routine, multi-modality medical image analysis has emerged as a significant field of study. However, annotating tumors for each modality is expensive and time-consuming. Consequently, unsupervised domain adaptation (UDA) has attracted considerable attention for its ability to achieve excellent performance on unlabeled cross-domain data. Most current UDA methods rely on image translation networks to achieve domain adaptation; however, the generation process may introduce visual inconsistencies and incorrect styles owing to the instability of generative adversarial networks. We therefore propose a novel and efficient method that dispenses with image translation networks, introducing a style enhancement method into a Domain Adversarial Neural Network (DANN)-based model to improve the generalization performance of the shared segmentation network. Experimental results show that our method achieves the best performance on the cross-modality cervical tumor segmentation task compared to current state-of-the-art UDA methods.
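To make the two ingredients named in the abstract concrete, the sketch below illustrates (a) the gradient reversal trick at the heart of DANN, which passes features through unchanged in the forward pass but negates gradients flowing back from a domain discriminator, and (b) a generic style enhancement via perturbation of per-channel feature statistics. This is a minimal NumPy sketch under stated assumptions, not the paper's implementation; the function names and the statistic-perturbation scheme are hypothetical stand-ins for the authors' actual method.

```python
import numpy as np


def style_perturb(x, alpha=0.5, rng=None):
    """Perturb per-channel feature statistics (mean/std) to simulate
    unseen imaging styles -- a generic style-enhancement sketch,
    not the paper's exact scheme.  x: feature map of shape (C, H, W).
    """
    rng = np.random.default_rng(rng)
    c = x.shape[0]
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True) + 1e-6
    # Sample new channel statistics in a neighborhood of the originals.
    mu_new = mu * (1 + alpha * rng.standard_normal((c, 1, 1)))
    sigma_new = sigma * (1 + alpha * rng.standard_normal((c, 1, 1)))
    # Re-normalize, then re-style with the sampled statistics.
    return (x - mu) / sigma * np.abs(sigma_new) + mu_new


def grad_reverse_backward(grad, lam=1.0):
    """DANN's gradient reversal layer, written out by hand: the
    forward pass is the identity, and the backward pass multiplies
    incoming gradients by -lambda, so the shared encoder is trained
    to *confuse* the domain discriminator."""
    return -lam * grad


x = np.random.default_rng(0).standard_normal((4, 8, 8))
y = style_perturb(x, alpha=0.3, rng=1)
print(y.shape)                                    # same shape as input
print(grad_reverse_backward(np.ones(3), lam=0.5))
```

In a full pipeline, style-perturbed features would feed both the segmentation head (supervised on the labeled source modality) and, through the reversal layer, a domain discriminator, so that the shared segmentation network learns style-invariant representations.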


Acknowledgements

This work was supported by the Shenzhen Science and Technology Program of China (grant JCYJ20200109115420720), the National Natural Science Foundation of China (No. U20A20373), and the Youth Innovation Promotion Association CAS (2022365). The authors express sincere gratitude for the support provided by the United Arab Emirates University (UAEU) through joint collaboration grant number G00003558.

Corresponding author

Correspondence to Wenjian Qin.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Zheng, B., He, J., Zhu, J., Xie, Y., Zaki, N., Qin, W. (2023). Style Enhanced Domain Adaptation Neural Network for Cross-Modality Cervical Tumor Segmentation. In: Qin, W., Zaki, N., Zhang, F., Wu, J., Yang, F., Li, C. (eds) Computational Mathematics Modeling in Cancer Analysis. CMMCA 2023. Lecture Notes in Computer Science, vol 14243. Springer, Cham. https://doi.org/10.1007/978-3-031-45087-7_15

  • DOI: https://doi.org/10.1007/978-3-031-45087-7_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45086-0

  • Online ISBN: 978-3-031-45087-7
