
MidFusNet: Mid-dense Fusion Network for Multi-modal Brain MRI Segmentation

  • Conference paper
Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries (BrainLes 2022)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13769))


Abstract

The fusion of multi-modality information has proved effective at improving the segmentation of targeted regions (e.g., tumours, lesions or organs) in medical images. In particular, layer-level fusion, as exemplified by DenseNet, has demonstrated promising performance on various medical segmentation tasks. Using stroke and infant brain segmentation as examples of ongoing challenging applications involving multi-modal images, we investigate whether it is possible to create a more effective and parsimonious fusion architecture based on the state-of-the-art fusion network, HyperDenseNet. Our hypothesis is that fully fusing features from the different modalities throughout the entire network not only increases computational complexity but also interferes with the unique feature learning of each modality. Nine new network variants involving different fusion points and mechanisms are proposed. Their performance is evaluated on the public iSeg-2017 and ISLES15-SISS datasets and on an acute stroke lesion dataset collected by medical professionals. The experimental results show that, of the nine proposed variants, the ‘mid-dense’ fusion network (named MidFusNet) achieves performance comparable to the state-of-the-art fusion architecture with a much more parsimonious network (i.e., 3.5 million fewer parameters than the baseline network for three modalities).


Notes

  1. https://github.com/Norika2020/MidFusNet

  2. https://github.com/DLTK/DLTK

  3. https://github.com/josedolz/HyperDenseNet

References

  1. Oppenheim, C., et al.: Tips and traps in brain MRI: applications to vascular disorders. Diagn. Interv. Imaging 93(12), 935–948 (2012)

  2. Havaei, M., et al.: Brain tumor segmentation with deep neural networks. Med. Image Anal. 35, 18–31 (2017)


  3. Kamnitsas, K., et al.: Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med. Image Anal. 36, 61–78 (2017)

  4. Lavdas, I., et al.: Fully automatic, multiorgan segmentation in normal whole body magnetic resonance imaging (MRI), using classification forests (CFs), convolutional neural networks (CNNs), and a multi-atlas (MA) approach. Med. Phys. 44(10), 5210–5220 (2017)


  5. Moeskops, P., Viergever, M.A., Mendrik, A.M., de Vries, L.S., Benders, M.J., Išgum, I.: Automatic segmentation of MR brain images with a convolutional neural network. IEEE Trans. Med. Imaging 35(5), 1252–1261 (2016)

  6. Valverde, S., et al.: Improving automated multiple sclerosis lesion segmentation with a cascaded 3D convolutional neural network approach. Neuroimage 155, 159–168 (2017)

  7. Zhang, W., et al.: Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. Neuroimage 108, 214–224 (2015)


  8. Guo, Z., Li, X., Huang, H., Guo, N., Li, Q.: Deep learning-based image segmentation on multimodal medical imaging. IEEE Trans. Radiation Plasma Med. Sci. 3(2), 162–169 (2019)


  9. Cai, H., Verma, R., Ou, Y., Lee, S., Melhem, E.R., Davatzikos, C.: Probabilistic segmentation of brain tumors based on multi-modality magnetic resonance images. In: 4th IEEE International Symposium on Biomedical Imaging, pp. 600–603 (2007)


  10. Klein, S., van der Heide, U.A., Lips, I.M., van Vulpen, M., Staring, M., Pluim, J.P.: Automatic segmentation of the prostate in 3D MR images by atlas matching using localized mutual information. Med. Phys. 35(4), 1407–1417 (2008)

  11. Menze, B.H., et al.: The multimodal brain tumor image segmentation benchmark. IEEE Trans. Med. Imaging 34(10), 1993–2024 (2015)


  12. Bhatnagar, G., Wu, Q.M.J., Liu, Z.: Directive contrast based multimodal medical image fusion in NSCT domain. IEEE Trans. Multimedia 15(5), 1014–1024 (2013)

  13. Singh, R., Khare, A.: Fusion of multimodal medical images using daubechies complex wavelet transform — a multiresolution approach. Inf. Fusion 19, 49–60 (2014)


  14. Yang, Y.: Multimodal medical image fusion through a new DWT based technique. In: Proceedings of the 4th International Conference on Bioinformatics and Biomedical Engineering, pp. 1–4 (2010)

  15. Zhu, X., Suk, H.I., Lee, S.W., Shen, D.: Subspace regularized sparse multitask learning for multiclass neurodegenerative disease identification. IEEE Trans. Biomed. Eng. 63(3), 607–618 (2016)


  16. Nie, D., Wang, L., Gao, Y., Shen, D.: Fully convolutional networks for multi-modality isointense infant brain image segmentation. In: IEEE 13th International Symposium on Biomedical Imaging (ISBI), pp. 1342–1345 (2016)

  17. Tseng, K.L., Lin, Y.L., Hsu, W., Huang, C.Y.: Joint sequence learning and cross-modality convolution for 3d biomedical segmentation. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 3739–3746 (2017)


  18. Aygun, M., Sahin, Y.H., Unal, G.: Multimodal convolutional neural networks for brain tumor segmentation (2018). arXiv preprint arXiv:1809.06191

  19. Chen, Y., Chen, J., Wei, D., Li, Y., Zheng, Y.: OctopusNet: a deep learning segmentation network for multi-modal medical images. In: Multiscale Multimodal Medical Imaging (MMMI 2019). Lecture Notes in Computer Science, vol. 11977 (2019)

  20. Dolz, J., Ben Ayed, I., Yuan, J., Desrosiers, C.: Isointense infant brain segmentation with a hyper-dense connected convolutional neural network. In: IEEE 15th International Symposium on Biomedical Imaging (ISBI), pp. 616–620 (2018)


  21. Dolz, J., Gopinath, K., Yuan, J., Lombaert, H., Desrosiers, C., Ben Ayed, I.: HyperDenseNet: a hyper-densely connected CNN for multi-modal image segmentation. IEEE Trans. Med. Imaging 38(5), 1116–1126 (2019)

  22. Huang, G., Liu, Z., van der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269 (2017)

  23. Maier, O., et al.: ISLES 2015 - a public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI. Med. Image Anal. 35, 250–269 (2017)

  24. Wang, L., et al.: Benchmark on automatic 6-month-old infant brain segmentation algorithms: the iSeg-2017 challenge. IEEE Trans. Med. Imaging 38, 2219–2230 (2019)

  25. Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., Ronneberger, O.: 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Ourselin, S., Joskowicz, L., Sabuncu, M., Unal, G., Wells, W. (eds.) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016. Lecture Notes in Computer Science, vol. 9901 (2016). https://doi.org/10.1007/978-3-319-46723-8_49

  26. Pawlowski, N., et al.: DLTK: state of the art reference implementations for deep learning on medical images (2017). arXiv preprint arXiv:1711.06853

  27. Kamnitsas, K., Chen, L., Ledig, C., Rueckert, D., Glocker, B.: Multi-scale 3D convolutional neural networks for lesion segmentation in brain MRI. In: Ischemic Stroke Lesion Segmentation - MICCAI, pp. 13–16 (2015)

  28. Feng, C., Zhao, D., Huang, M.: Segmentation of stroke lesions in multi-spectral MR images using bias correction embedded FCM and three phase level sets. In: Ischemic Stroke Lesion Segmentation – MICCAI (2015)

  29. Halme, H., Korvenoja, A., Salli, E.: Segmentation of stroke lesion using spatial normalisation, random forest classification and contextual clustering. In: Ischemic Stroke Lesion Segmentation - MICCAI (2015)



Author information


Corresponding author

Correspondence to Wenting Duan.


Appendices

Appendix A. Architecture Layout of the Baseline Network

Fig. 5. Baseline architecture layout for the case of three imaging modalities. The feature map generated by each convolutional block is colour coded: the top stream is shown in shades of blue, the middle stream in shades of red, and the bottom stream in shades of green, with deeper layers drawn in darker shades. The stacked feature maps show how the dense connections are formed and the unique shuffling used for concatenation along each modality path. (Architecture drawing adapted and modified from [21])
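The ‘unique shuffling’ of concatenated feature maps described in the caption can be sketched in a few lines. The following NumPy sketch is illustrative only (the function name, array shapes, and channel axis are assumptions, not the authors’ implementation): each modality stream concatenates all streams’ previous feature maps along the channel axis, but reorders them so that its own features come first.

```python
import numpy as np

def interleave_concat(features_per_stream):
    """Hyper-dense style fusion step (illustrative sketch).

    features_per_stream: one list per modality stream, each holding the
    feature maps (N, C, D, H, W arrays) produced so far on that stream.
    Returns one fused input per stream: all streams' maps concatenated
    along the channel axis, with the stream's own maps placed first.
    """
    n_streams = len(features_per_stream)
    fused = []
    for s in range(n_streams):
        # Shuffled order: this stream's own history first, then the others'.
        order = [s] + [t for t in range(n_streams) if t != s]
        maps = [m for t in order for m in features_per_stream[t]]
        fused.append(np.concatenate(maps, axis=1))
    return fused
```

With three streams each holding two 4-channel maps, every fused input has 24 channels; only the ordering along the channel axis differs between streams.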

Appendix B. Detailed Parameter Setting of Proposed Networks

Table 6. The parameters associated with each convolutional layer in the baseline and proposed networks. The number of kernels and the processed 3D image output are kept the same for all networks. The major differences occur where interleaved concatenation takes place, which leads to varying input sizes along the feature channels. Notations: CB - convolutional block; FC - fully convolutional layer; No. k – number of kernels; No. c – number of classes; v1 - mid-dense; v2 - late-dense; v3 - skip-dense; v4 - mid5-dense; v5 - mid2-dense; v6 - mid3-dense; v7 - midx-dense; v8 - midm-dense; and v9 - mids-dense.
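The channel growth that Table 6 tabulates can also be estimated programmatically. The sketch below is a rough parameter-counting model under stated assumptions (3×3×3 kernels, dense connections within a stream, cross-modality concatenation switched on from a chosen block index, and illustrative block widths); it is not the authors’ exact accounting, but it shows why delaying fusion to a mid-level block shrinks the network.

```python
def stream_conv_params(widths, n_modalities, fuse_from, in_ch=1, k=3):
    """Approximate conv parameters for one modality stream.

    widths: output channels of each convolutional block.
    fuse_from: block index from which the other modalities' previous
    feature maps are also concatenated (0 = fully hyper-dense fusion,
    len(widths) = no cross-modality fusion at all).
    """
    total = 0
    for i, w in enumerate(widths):
        c_in = in_ch + sum(widths[:i])             # own dense history
        if i >= fuse_from:                          # cross-modality concat
            c_in += (n_modalities - 1) * sum(widths[:i])
        total += w * (c_in * k ** 3 + 1)            # weights + one bias per kernel
    return total

# Illustrative block widths (assumed, not the paper's exact configuration).
widths = [25, 25, 25, 50, 50, 50, 75, 75, 75]
full = stream_conv_params(widths, n_modalities=3, fuse_from=0)  # baseline-style
mid = stream_conv_params(widths, n_modalities=3, fuse_from=4)   # mid-level fusion
```

With `fuse_from=4`, the early blocks stay single-modality, so their input channel counts (and hence kernel weights) shrink; this is the source of the parameter savings reported in the abstract.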


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Duan, W., Zhang, L., Colman, J., Gulli, G., Ye, X. (2023). MidFusNet: Mid-dense Fusion Network for Multi-modal Brain MRI Segmentation. In: Bakas, S., et al. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2022. Lecture Notes in Computer Science, vol 13769. Springer, Cham. https://doi.org/10.1007/978-3-031-33842-7_9


  • DOI: https://doi.org/10.1007/978-3-031-33842-7_9


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33841-0

  • Online ISBN: 978-3-031-33842-7

