Abstract
The crossMoDA challenge aims to automatically segment the vestibular schwannoma (VS) tumor and cochlea regions of unlabeled high-resolution T2 scans by leveraging labeled contrast-enhanced T1 scans. The 2022 edition extends the segmentation task to multi-institutional scans. In this work, we propose an unpaired cross-modality segmentation framework that uses data augmentation and hybrid convolutional networks. To handle the heterogeneous intensity distributions and varied image sizes of multi-institutional scans, we apply min-max normalization to scale the intensities of all scans to the range [-1, 1], and use voxel-size resampling and center cropping to obtain fixed-size sub-volumes for training. We adopt two data augmentation strategies to learn semantic information effectively and to generate realistic target-domain scans: generative and online data augmentation. For generative data augmentation, we use CUT and CycleGAN to generate two groups of realistic T2 volumes with different details and appearances for supervised segmentation training. For online data augmentation, we design a random tumor-signal-reduction method that simulates the heterogeneity of VS tumor signals. Furthermore, we utilize an advanced hybrid convolutional network with multi-dimensional convolutions to adaptively learn sparse inter-slice information and dense intra-slice information for accurate volumetric segmentation of the VS tumor and cochlea regions in anisotropic scans. On the crossMoDA 2022 validation dataset, our method produces promising results, achieving mean DSC values of 72.47% and 76.48% and mean ASSD values of 3.42 mm and 0.53 mm for the VS tumor and cochlea regions, respectively.
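As a rough illustration of the preprocessing and online augmentation described in the abstract, the sketch below implements min-max normalization to [-1, 1], voxel-size resampling, center cropping to a fixed sub-volume, and a random tumor-signal-reduction step. The target spacing, crop size, and attenuation range are illustrative assumptions, not values reported in the paper.

```python
# Minimal preprocessing / augmentation sketch (NumPy + SciPy).
# All numeric parameters below are assumptions for illustration only.
import numpy as np
from scipy.ndimage import zoom


def min_max_normalize(volume: np.ndarray) -> np.ndarray:
    """Scale scan intensities to the [-1, 1] range used for network input."""
    v_min, v_max = volume.min(), volume.max()
    return 2.0 * (volume - v_min) / (v_max - v_min + 1e-8) - 1.0


def resample_to_spacing(volume, spacing, target_spacing=(0.4, 0.4, 1.0), order=1):
    """Resample an anisotropic scan to a fixed voxel size (trilinear by default)."""
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    return zoom(volume, zoom=factors, order=order)


def center_crop(volume, crop_size=(256, 256, 48)):
    """Extract a fixed-size sub-volume around the center (assumes the volume
    is at least crop_size along each axis)."""
    starts = [max((d - c) // 2, 0) for d, c in zip(volume.shape, crop_size)]
    slices = tuple(slice(s, s + c) for s, c in zip(starts, crop_size))
    return volume[slices]


def random_tumor_signal_reduce(volume, tumor_mask, low=0.5, high=1.0, rng=None):
    """Online augmentation: randomly attenuate intensities inside the VS tumor
    mask to mimic heterogeneous tumor signal. Applied to non-negative raw
    intensities before normalization."""
    rng = np.random.default_rng() if rng is None else rng
    factor = rng.uniform(low, high)
    out = volume.copy()
    out[tumor_mask > 0] = out[tumor_mask > 0] * factor
    return out
```

A training pipeline would typically chain these steps per scan (resample, crop, apply the random tumor-signal reduction with some probability, then normalize) before feeding sub-volumes to the segmentation network.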
References
Dorent, R., et al.: CrossMoDA 2021 challenge: benchmark of cross-modality domain adaptation techniques for vestibular schwannoma and cochlea segmentation. Med. Image Anal., 102628 (2022). https://doi.org/10.1016/j.media.2022.102628
Dorent, R., et al.: Scribble-based domain adaptation via co-segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 479–489 (2020)
Shapey, J., et al.: Segmentation of vestibular schwannoma from MRI, an open annotated dataset and baseline algorithm. Sci. Data 8(1), 286 (2021). https://doi.org/10.1038/s41597-021-01064-w
Shapey, J., et al.: An artificial intelligence framework for automatic segmentation and volumetry of vestibular schwannomas from contrast-enhanced T1-weighted and high-resolution T2-weighted MRI. J. Neurosurg. 134(1), 171–179 (2019)
Wang, G., et al.: Automatic segmentation of vestibular schwannoma from T2-weighted MRI by deep spatial attention with hardness-weighted loss. In: Shen, D., et al. (eds.) MICCAI 2019. LNCS, vol. 11765, pp. 264–272. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32245-8_30
Dong, Z., et al.: MNet: rethinking 2D/3D networks for anisotropic medical image segmentation. arXiv preprint (2022). http://arxiv.org/abs/2205.04846
Zhu, J.-Y., et al.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232 (2017)
Park, T., Efros, A.A., Zhang, R., Zhu, J.-Y.: Contrastive learning for unpaired image-to-image translation. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) Computer Vision – ECCV 2020: Part IX, pp. 319–345. Springer International Publishing, Cham (2020). https://doi.org/10.1007/978-3-030-58545-7_19
Shin, H., Kim, H., Kim, S., Jun, Y., Eo, T., Hwang, D.: COSMOS: cross-modality unsupervised domain adaptation for 3D medical image segmentation based on target-aware domain translation and iterative self-training. arXiv preprint (2022). http://arxiv.org/abs/2203.16557
Dong, H., Yu, F., Zhao, J., Dong, B., Zhang, L.: Unsupervised domain adaptation in semantic segmentation based on pixel alignment and self-training. arXiv preprint (2021). http://arxiv.org/abs/2109.14219
Choi, J.W.: Using out-of-the-box frameworks for unpaired image translation and image segmentation for the crossMoDA challenge. arXiv preprint (2021). http://arxiv.org/abs/2110.01607
Liu, H., Fan, Y., Cui, C., Su, D., McNeil, A., Dawant, B.M.: Unsupervised domain adaptation for vestibular schwannoma and cochlea segmentation via semi-supervised learning and label fusion. arXiv preprint (2022). http://arxiv.org/abs/2201.10647
Huo, Y., et al.: SynSeg-Net: synthetic segmentation without target modality ground truth. IEEE Trans. Med. Imaging 38(4), 1016–1025 (2018)
Dou, Q., et al.: PnP-AdaNet: Plug-and-play adversarial domain adaptation network at unpaired cross-modality cardiac segmentation. IEEE Access 7, 99065–99076 (2019)
Chen, C., et al.: Unsupervised bidirectional cross-modality adaptation via deeply synergistic image and feature alignment for medical image segmentation. IEEE Trans. Med. Imaging 39(7), 2494–2505 (2020)
Pei, C., Wu, F., Huang, L., Zhuang, X.: Disentangle domain features for cross-modality cardiac image segmentation. Med. Image Anal. 71, 102078 (2021)
Tsai, Y.-H., et al.: Learning to adapt structured output space for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7472–7481 (2018)
Vesal, S., et al.: Adapt everywhere: unsupervised adaptation of point-clouds and entropy minimization for multi-modal cardiac image segmentation. IEEE Trans. Med. Imaging 40(7), 1838–1851 (2021)
Liu, H., et al.: A bidirectional multilayer contrastive adaptation network with anatomical structure preservation for unpaired cross-modality medical image segmentation. Comput. Biol. Med., 105964 (2022)
Yao, K., et al.: A novel 3D unsupervised domain adaptation framework for cross-modality medical image segmentation. IEEE J. Biomed. Health Inform. (2022)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Zhuang, Y., Liu, H., Song, E., Cetinkaya, C., Hung, CC. (2023). An Unpaired Cross-Modality Segmentation Framework Using Data Augmentation and Hybrid Convolutional Networks for Segmenting Vestibular Schwannoma and Cochlea. In: Bakas, S., et al. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2022. Lecture Notes in Computer Science, vol 14092. Springer, Cham. https://doi.org/10.1007/978-3-031-44153-0_8