DOI: 10.1145/3627341.3627343

DA-UNet: Deformable Attention U-Net for Nucleus Segmentation

Published: 15 December 2023

Abstract

Cell nucleus segmentation plays a significant role in computer-aided systems for cancer diagnosis. However, complex visual features, such as blurring and irregular shapes, increase the difficulty of segmentation. This paper proposes a deformable attention U-Net (DA-UNet) to enhance the learning of the complex visual features of nuclei. Building on the traditional U-Net, we introduce a deformable attention (DA) module, which learns more suitable shape features through an attention mechanism and deformable convolution. Experiments on the 2018 Data Science Bowl and MoNuSeg datasets show that the proposed DA-UNet achieves good results.


Cited By

  • (2024) Lightweight multi-scale attention group fusion structure for nuclei segmentation. The Journal of Supercomputing 81(1). DOI: 10.1007/s11227-024-06710-9. Online publication date: 22 November 2024.


        Published In

        ICCVIT '23: Proceedings of the 2023 International Conference on Computer, Vision and Intelligent Technology
        August 2023
        378 pages
        ISBN:9798400708701
        DOI:10.1145/3627341

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Author Tags

1. Nucleus segmentation
        2. U-Net
        3. deformable attention

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Conference

        ICCVIT 2023

        Acceptance Rates

        ICCVIT '23 Paper Acceptance Rate 54 of 142 submissions, 38%;
        Overall Acceptance Rate 54 of 142 submissions, 38%


