ABFNet: Attention Bottlenecks Fusion Network for Multimodal Brain Tumor Segmentation

  • Conference paper
  • Pattern Recognition (ACPR 2023)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14407)

Abstract

Brain tumor segmentation based on multimodal magnetic resonance imaging (MRI) is a crucial step in the early diagnosis and prognosis evaluation of glioma. Current multimodal brain tumor segmentation algorithms typically perform simple concatenation of the modalities and fail to exploit their complementary information, which limits segmentation accuracy. To address this problem, we design an Attention Bottlenecks Fusion Network (ABFNet) that segments brain tumors from MRI by fusing complementary multimodal information. First, building on an encoder-decoder architecture, we employ multiple CNN encoders to extract modality-specific features. Then, a Bottleneck Fusion Transformer (BFT) module is introduced, which uses Transformer-style attention to fuse information from the different modalities through a small number of latent bottleneck tokens, forcing each modality to condense its most necessary information before sharing it. Lastly, a Fusion Connection Gating (FCG) mechanism is proposed to fuse modality-specific features from different levels through concatenation and convolutional operations; the multi-scale features used in FCG enrich the representation and enhance the discriminability of every modality. Experiments are conducted on the BraTS2020 and BraTS2018 datasets. On BraTS2020, the Dice similarity coefficients (DSC) of our method on the complete tumor, core tumor, and enhancing tumor are 92.95%, 91.64%, and 79.62%, respectively; on BraTS2018, the corresponding results are 90.12%, 83.40%, and 70.56%. Experiments under missing-modality conditions are also conducted, demonstrating the robustness and effectiveness of our method.
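The bottleneck-token fusion at the heart of the BFT module can be illustrated with a minimal sketch. This follows the attention-bottleneck idea the abstract describes (each modality attends jointly over its own tokens and a small set of shared bottleneck tokens, and the per-modality bottleneck updates are averaged), not the authors' exact implementation: the single-head attention without learned projections and the token counts (32 tokens per modality, 4 bottleneck tokens) are simplifying assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d):
    # Single-head scaled dot-product attention; identity Q/K/V
    # projections are a simplification for this sketch.
    scores = tokens @ tokens.T / np.sqrt(d)
    return softmax(scores) @ tokens

def bottleneck_fusion(modality_tokens, bottleneck, d):
    """One fusion step: each modality attends over [its tokens; bottleneck],
    so cross-modal information can only flow through the few bottleneck
    tokens; the bottleneck updates from all modalities are then averaged."""
    n_b = bottleneck.shape[0]
    new_modalities, new_bottlenecks = [], []
    for tokens in modality_tokens:
        joint = np.concatenate([tokens, bottleneck], axis=0)
        out = self_attention(joint, d)
        new_modalities.append(out[:-n_b])   # updated modality tokens
        new_bottlenecks.append(out[-n_b:])  # this modality's bottleneck update
    fused_bottleneck = np.mean(new_bottlenecks, axis=0)
    return new_modalities, fused_bottleneck

rng = np.random.default_rng(0)
d = 16
mods = [rng.standard_normal((32, d)) for _ in range(4)]  # 4 MRI modalities
bn = rng.standard_normal((4, d))                         # 4 bottleneck tokens
mods2, bn2 = bottleneck_fusion(mods, bn, d)
print(bn2.shape)  # (4, 16)
```

Because the modalities never attend to each other directly, all shared information must be condensed into the bottleneck tokens, which is what forces each modality to distill its most necessary content and keeps the fusion cost far below full pairwise cross-attention.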



Author information

Corresponding author: Jingliang Cheng.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Li, N. et al. (2023). ABFNet: Attention Bottlenecks Fusion Network for Multimodal Brain Tumor Segmentation. In: Lu, H., Blumenstein, M., Cho, SB., Liu, CL., Yagi, Y., Kamiya, T. (eds) Pattern Recognition. ACPR 2023. Lecture Notes in Computer Science, vol 14407. Springer, Cham. https://doi.org/10.1007/978-3-031-47637-2_24

  • DOI: https://doi.org/10.1007/978-3-031-47637-2_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47636-5

  • Online ISBN: 978-3-031-47637-2

  • eBook Packages: Computer Science, Computer Science (R0)
