Abstract
Segmentation of brain tumors is a critical task for patient disease management. Since this task is time-consuming and subject to inter-expert delineation variation, automatic methods are of significant interest. The Multimodal Brain Tumor Segmentation Challenge (BraTS) has been in place for about a decade and provides a common platform to compare automatic segmentation algorithms based on multiparametric magnetic resonance imaging (mpMRI) of gliomas. This year the challenge took a major step forward by approximately tripling the total amount of data. We address the image segmentation challenge by developing a network based on a Bridge-Unet, enhanced with a concatenation of max and average pooling for downsampling, Squeeze-and-Excitation (SE) blocks, Atrous Spatial Pyramid Pooling (ASPP), and EvoNorm-S0. Our model was trained on the 1251 training cases from the BraTS 2021 challenge and achieved average Dice similarity coefficients (DSC) of 0.92457, 0.87811, and 0.84094, as well as 95% Hausdorff distances (HD95) of 4.19442, 7.55256, and 14.13390 mm for the whole tumor, tumor core, and enhancing tumor, respectively, on the online validation platform composed of 219 cases. Similarly, our solution achieved DSCs of 0.92548, 0.87628, and 0.87122, as well as HD95 values of 4.30711, 17.84987, and 12.23361 mm on the test dataset composed of 530 cases. Overall, our approach yielded well-balanced performance for each tumor subregion.
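To make the reported metric concrete, the Dice similarity coefficient between a predicted and a reference binary segmentation can be computed as in the following minimal NumPy sketch. This is an illustration only, not the challenge's official evaluation code; the function name, the toy masks, and the convention of returning 1.0 for two empty masks are our own assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |P intersect R| / (|P| + |R|).
    Returns 1.0 by convention when both masks are empty.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy example: two overlapping 2D masks (3 voxels each, 2 in common).
p = np.array([[1, 1, 0],
              [0, 1, 0]])
r = np.array([[1, 0, 0],
              [0, 1, 1]])
print(dice_coefficient(p, r))  # 2*2 / (3+3) = 0.666...
```

The same formula applies voxel-wise to each 3D tumor subregion (whole tumor, tumor core, enhancing tumor) to produce the per-region DSC values reported above.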
Acknowledgements
We would like to acknowledge Y. Boursin, M. Deloger, J.P. Larmarque, and the Gustave Roussy Cancer Campus DTNSI team for providing the infrastructure resources used in this work.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Carré, A., Deutsch, E., Robert, C. (2022). Automatic Brain Tumor Segmentation with a Bridge-Unet Deeply Supervised Enhanced with Downsampling Pooling Combination, Atrous Spatial Pyramid Pooling, Squeeze-and-Excitation and EvoNorm. In: Crimi, A., Bakas, S. (eds.) Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2021. Lecture Notes in Computer Science, vol. 12963. Springer, Cham. https://doi.org/10.1007/978-3-031-09002-8_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-09001-1
Online ISBN: 978-3-031-09002-8
eBook Packages: Computer Science (R0)