GLUNet: Global-Local Fusion U-Net for 2D Medical Image Segmentation

  • Conference paper
  • First Online:
Artificial Neural Networks and Machine Learning – ICANN 2021 (ICANN 2021)

Abstract

Medical image segmentation is a fundamental technology for computer-aided diagnosis and clinical disease monitoring. Most existing deep learning-based methods focus solely on the region and position of objects, without considering edge information, which provides accurate object contours and benefits medical image segmentation. In this paper, we propose a novel Global-Local fusion U-Net model (GLUNet) to address this problem; it contains a Global Attention Module (GAM) and a Local Edge Detection Module (LEDM). In the GAM, we embed a residual block and a convolutional block attention module to capture the contextual and spatial information of objects. Meanwhile, to obtain accurate edge information of objects, we devise the LEDM to integrate edge information into our model. We also propose a multi-task loss function that combines the segmentation loss and the edge loss to train GLUNet. Experimental results demonstrate that our proposed method outperforms the original U-Net and other state-of-the-art methods for lung segmentation in Computed Tomography (CT) images, cell/nuclei segmentation, and vessel segmentation in retinal images.
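The multi-task objective described in the abstract can be sketched as follows. This is a minimal NumPy illustration, assuming binary cross-entropy for both the segmentation and edge terms and a hypothetical weighting factor `lam`; the paper's exact loss formulation and weighting may differ.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over all pixels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred)
                          + (1 - target) * np.log(1 - pred)))

def multi_task_loss(seg_pred, seg_gt, edge_pred, edge_gt, lam=0.5):
    """Weighted sum of a segmentation loss and an edge loss.

    `lam` balances the two terms; its value here is illustrative,
    not taken from the paper.
    """
    return lam * bce(seg_pred, seg_gt) + (1.0 - lam) * bce(edge_pred, edge_gt)
```

In practice both terms are computed from the network's two output heads (segmentation map and edge map) against the ground-truth mask and its extracted contour, and a single backward pass optimizes them jointly.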



Author information


Corresponding author

Correspondence to Hongyan Quan.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Wang, N., Quan, H. (2021). GLUNet: Global-Local Fusion U-Net for 2D Medical Image Segmentation. In: Farkaš, I., Masulli, P., Otte, S., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2021. ICANN 2021. Lecture Notes in Computer Science(), vol 12894. Springer, Cham. https://doi.org/10.1007/978-3-030-86380-7_7

  • DOI: https://doi.org/10.1007/978-3-030-86380-7_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86379-1

  • Online ISBN: 978-3-030-86380-7

  • eBook Packages: Computer Science (R0)
