DOI: 10.1145/3697355.3697377 · Research article · BDIOT Conference Proceedings

GlobalLocalSegNet: A Hybrid Model for Complex Medical Image Segmentation Combining Global and Local Features

Published: 12 December 2024

Abstract

Segmentation of complex medical images is critically important in clinical practice. However, renowned segmentation models such as U-Net [1] and TransUNet [2] often suffer degraded performance on images with complex backgrounds or high variability. These models also struggle to capture the dependencies between global and local features effectively, limiting their application to complex medical image segmentation tasks. To address these shortcomings, this paper introduces a novel U-shaped model, GlobalLocalSegNet (GLS-Net), designed for precise segmentation of complex medical images. GLS-Net comprises two key sub-modules: a transformer with positional encoding that extracts global information, and a convolutional neural network tailored to local information extraction. Moreover, a U-shaped network structure based on the fusion of global and local features is designed to enhance feature extraction and detail capture. The model is evaluated on three publicly available complex medical image datasets, and on one non-medical complex image dataset to assess its scalability. A series of comparative experiments confirms the robustness, scalability, and stability of GLS-Net.
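The abstract's core idea, combining a transformer branch (global context via self-attention over positionally encoded tokens) with a CNN branch (local context via small convolution kernels) and fusing the two feature maps, can be illustrated with a minimal NumPy sketch. This is a conceptual toy, not the paper's architecture: the fusion weight `alpha`, the single-channel tokens, the sinusoidal positional term, and the averaging kernel are all illustrative assumptions, since the paper's exact design is not given here.

```python
import numpy as np

def local_branch(x, kernel):
    # Naive "same"-padded 2-D convolution: stands in for the CNN's
    # local receptive field over the feature map.
    h, w = x.shape
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * kernel)
    return out

def global_branch(x):
    # Single-head self-attention over flattened pixels: stands in for
    # the transformer's global context aggregation.
    tokens = x.reshape(-1, 1).astype(float)          # each pixel as a 1-d token
    tokens = tokens + np.sin(np.arange(len(tokens)))[:, None]  # toy positional encoding
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return (weights @ tokens).reshape(x.shape)

def fuse(x, kernel, alpha=0.5):
    # Weighted-sum fusion of the global and local feature maps.
    return alpha * global_branch(x) + (1 - alpha) * local_branch(x, kernel)

img = np.arange(16, dtype=float).reshape(4, 4)       # tiny stand-in "image"
k = np.ones((3, 3)) / 9.0                            # simple averaging kernel
fused = fuse(img, k)
print(fused.shape)  # (4, 4)
```

In GLS-Net this fusion reportedly happens inside a U-shaped encoder-decoder rather than as a single weighted sum, so the sketch only conveys why the two branches are complementary: the convolution responds to local neighborhoods, while the attention map mixes information across the whole spatial extent.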

References

[1]
Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation[C]//Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18. Springer International Publishing, 2015: 234-241.
[2]
Chen J, Lu Y, Yu Q, et al. TransUNet: Transformers make strong encoders for medical image segmentation[J]. arXiv preprint arXiv:2102.04306, 2021.
[3]
Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[J]. arXiv preprint arXiv:2010.11929, 2020.
[4]
Kim Y. Convolutional neural networks for sentence classification[J]. arXiv preprint arXiv:1408.5882, 2014.
[5]
He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.
[6]
Fu J, Liu J, Tian H, et al. Dual attention network for scene segmentation[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019: 3146-3154.
[7]
Szegedy C, Ioffe S, Vanhoucke V, et al. Inception-v4, inception-resnet and the impact of residual connections on learning[C]//Proceedings of the AAAI conference on artificial intelligence. 2017, 31(1).
[8]
Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 1-9.
[9]
Woo S, Park J, Lee J Y, et al. Cbam: Convolutional block attention module[C]//Proceedings of the European conference on computer vision (ECCV). 2018: 3-19.
[10]
Chen H, Li C, Wang G, et al. GasHis-Transformer: A multi-scale visual transformer approach for gastric histopathological image detection[J]. Pattern Recognition, 2022, 130: 108827.
[11]
Kirmani S, Madduri K. Spectral graph drawing: Building blocks and performance analysis[C]//2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2018.
[12]
Abdulwahhab A H, Mahmood N T, Mohammed A A, et al. A review on medical image applications based on deep learning techniques[J]. Journal of Image and Graphics, 2024, 12(3): 215-227.



Published In

BDIOT '24: Proceedings of the 2024 8th International Conference on Big Data and Internet of Things
September 2024
412 pages
ISBN:9798400717529
DOI:10.1145/3697355

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Complex medical image segmentation
  2. Feature extraction
  3. Global and local feature fusion
  4. U-shaped network structure

Qualifiers

  • Research-article

Funding Sources

  • Natural Science Foundation of Chongqing, China
  • Natural Science Foundation of Chongqing Education Commission, China

Conference

BDIOT 2024

Acceptance Rates

Overall Acceptance Rate 75 of 136 submissions, 55%

