
Multi-path Feature Fusion and Channel Feature Pyramid for Brain Tumor Segmentation in MRI

  • Conference paper
Image and Graphics (ICIG 2023)

Abstract

Automated segmentation of gliomas in MRI images is crucial for timely diagnosis and treatment planning. In this paper, we propose an encoder-decoder network for brain tumor MRI image segmentation that addresses the class imbalance inherent in brain tumor data. Our method introduces a multi-path feature fusion module and a multi-channel feature pyramid module into the U-Net architecture to alleviate low segmentation accuracy for small targets and insufficient use of multi-scale information. The output feature maps from each level of the encoder are fed into the multi-path feature fusion module to achieve multi-scale feature fusion. Both the encoder and decoder of the proposed network use cascaded dilated convolution modules, which extract additional feature information without increasing the number of parameters.
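The cascaded dilated convolutions mentioned above enlarge the receptive field without adding parameters, since the kernel size stays fixed while the dilation rate grows. The paper's exact module layout is not reproduced here, so the following PyTorch sketch is only an illustration of that general idea; the class name, channel counts, dilation rates, and residual connection are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class CascadedDilatedBlock(nn.Module):
    """Illustrative cascade of 3x3 convolutions with growing dilation rates.

    Every stage uses the same 3x3 kernel, so stacking dilations widens the
    receptive field without increasing the parameter count relative to
    ordinary 3x3 convolutions.
    """

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )

    def forward(self, x):
        # Apply the dilated convolutions in sequence and keep a residual
        # connection so fine detail from small targets is not washed out.
        out = x
        for stage in self.stages:
            out = stage(out)
        return out + x


if __name__ == "__main__":
    feat = torch.randn(1, 64, 40, 40)          # dummy encoder feature map
    block = CascadedDilatedBlock(64)
    print(block(feat).shape)                   # torch.Size([1, 64, 40, 40])
```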

On 50 samples from the BraTS 2019 training dataset, our method achieved Dice scores of 0.8551, 0.8728, and 0.7906, and Hausdorff distances of 2.5693, 1.5845, and 2.7284 for the whole tumor, tumor core, and enhancing tumor, respectively. The proposed encoder-decoder network shows clear performance advantages in brain tumor MRI image segmentation, especially in addressing low segmentation accuracy for small targets and insufficient use of multi-scale information. Moreover, our method extracts additional feature information without increasing the number of parameters, demonstrating good practicality and versatility. These results suggest that our method provides an effective solution for automated segmentation of brain tumors in MRI.
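For readers unfamiliar with the reported overlap metric, the Dice score between a predicted mask P and a ground-truth mask G is 2|P ∩ G| / (|P| + |G|). The short NumPy sketch below shows how such a score can be computed for a single binary mask pair; the function name, epsilon smoothing, and toy masks are our own illustration, not code from the paper.

```python
import numpy as np


def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks: 2|P ∩ G| / (|P| + |G|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


# Toy example: the prediction has 4 foreground pixels, the ground truth has 3,
# and they overlap on 3 pixels, giving Dice = 2*3 / (4+3) ~= 0.8571.
pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1
target = np.zeros((4, 4)); target[1:3, 1:2] = 1; target[1, 2] = 1
print(round(dice_score(pred, target), 4))  # 0.8571
```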



Author information


Corresponding author

Correspondence to Zhengyao Bai.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, Y., Bai, Z., You, Y., Liu, X., Xiao, X., Xu, Z. (2023). Multi-path Feature Fusion and Channel Feature Pyramid for Brain Tumor Segmentation in MRI. In: Lu, H., et al. (eds.) Image and Graphics. ICIG 2023. Lecture Notes in Computer Science, vol. 14359. Springer, Cham. https://doi.org/10.1007/978-3-031-46317-4_3


  • DOI: https://doi.org/10.1007/978-3-031-46317-4_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-46316-7

  • Online ISBN: 978-3-031-46317-4

  • eBook Packages: Computer Science (R0)
