GDF-Net: A multi-task symmetrical network for retinal vessel segmentation

https://doi.org/10.1016/j.bspc.2022.104426

Highlights

  • A multi-task symmetric network is proposed for fundus vessel segmentation.

  • Two symmetric networks are proposed to extract global and detail features respectively.

  • An attention fusion network is proposed for merging multi-source information.

Abstract

Retinal fundus vessels contain rich geometric features, including both thick and thin vessels, which are particularly important for the accurate clinical diagnosis of cardiovascular diseases. Deep convolutional neural networks (DCNNs), especially the U-Net network and its variants, have shown remarkable performance on fundus vessel segmentation thanks to their effective contextual feature expression ability. However, due to repeated convolution and pooling operations, existing DCNN-based methods lose information about details and small objects, which impairs the detection of thin vessels and degrades the final segmentation performance. To address this problem, a multi-task symmetric network, called GDF-Net, is proposed for accurate retinal fundus image segmentation. It is composed of two symmetrical segmentation networks (a global segmentation network branch and a detail enhancement network branch) and a fusion network branch. To address the information loss issue, the two symmetrical segmentation networks extract global contextual features and detail features, respectively. To combine the advantages of both symmetric segmentation networks, a fusion network is presented to integrate their features and improve the segmentation accuracy of retinal fundus vessels. Experiments demonstrate that GDF-Net achieves competitive performance on retinal vessel segmentation.

Introduction

Eye diseases can cause blindness and bring great harm to patients' bodies and minds. Retinal fundus images usually contain rich geometric features, which are crucial for accurate clinical diagnosis [1]. Ophthalmologists can use these features to diagnose most eye diseases [2]. To effectively reduce the workload of ophthalmologists and prevent blindness, computer-aided diagnosis can provide an efficient scheme for diagnosis, detection and treatment, which relies on effective and accurate retinal fundus segmentation. However, retinal vessel segmentation faces challenging factors, such as complicated morphology, poor contrast and class imbalance, which hinder the automatic segmentation of retinal fundus vessels. Therefore, developing an effective retinal fundus segmentation method is fundamental and meaningful work for clinical diagnosis.

In early times, fundus images were manually annotated by experienced experts, which required much time and effort. Moreover, the annotation results of different experts could be inconsistent. Over time, many automated and semi-automated image segmentation methods have emerged for fundus image segmentation, greatly improving segmentation efficiency and precision. Conventional model-based methods realize fundus vessel segmentation with hand-crafted detection rules, including matched filtering [3], blood vessel tracking [4], edge-based methods [5], region-based methods, etc. Facing the poor local contrast of fundus blood vessels, Chaudhuri et al. introduced an optical and spatial basis operator and proposed a matched filter method for retinal fundus segmentation [3]. Based on blood vessel tracking, Aylward et al. used intensity ridges to approximate the blood vessel axis for retinal fundus segmentation [6]. Toledo et al. introduced a new deformable eigensnake model to segment slender structures in a probabilistic framework [7]. However, the design of such detection rules relies on expert experience. Furthermore, model-based methods often have poor generalization ability, especially on challenging datasets.

In recent years, DCNNs have developed rapidly and can extract abstract, high-dimensional features, and they have demonstrated good performance on image segmentation. Shelhamer et al. presented the fully convolutional network (FCN) for image segmentation, which can accept inputs of any size and generate corresponding outputs through effective inference and learning [8]. However, the huge number of parameters in FCN increases the computational burden and affects efficiency. Badrinarayanan et al. proposed SegNet, which reuses the max-pooling indices during feature upsampling, reducing the number of model parameters and making the network easy to train [9]. Papandreou et al. proposed atrous (dilated) convolution, which enlarges the receptive field by inserting holes into the standard convolution, avoiding information loss without increasing the computational cost [10]. Ronneberger et al. put forward the U-Net network for high-precision medical image segmentation, which includes a contraction path and a symmetrical expansion path [11]. Skip connections are also adopted in U-Net to combine low-resolution with high-resolution features, and the network has shown superior performance on diverse tasks. Guo et al. proposed a spatial attention U-Net model for retinal vessel segmentation [12]. To make full use of multiscale information, Shi et al. proposed a multiscale dense network (MD-Net) [13], into which the Squeeze-and-Excitation block was also introduced to emphasize key feature channels.
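As a minimal illustration of the dilated-convolution idea mentioned above, the following PyTorch sketch (not code from any of the cited works; the layer sizes are arbitrary assumptions) shows that a dilated 3x3 convolution keeps the same parameter count and output size as a standard one while covering a larger neighbourhood.

```python
import torch
import torch.nn as nn

# Standard 3x3 convolution vs. a dilated (atrous) 3x3 convolution.
standard = nn.Conv2d(16, 16, kernel_size=3, padding=1, dilation=1)
dilated = nn.Conv2d(16, 16, kernel_size=3, padding=2, dilation=2)

x = torch.randn(1, 16, 64, 64)
print(standard(x).shape, dilated(x).shape)  # both: torch.Size([1, 16, 64, 64])

# Identical parameter count ...
print(sum(p.numel() for p in standard.parameters()) ==
      sum(p.numel() for p in dilated.parameters()))  # True

# ... but the dilated kernel spans a 5x5 neighbourhood (one-pixel holes
# between taps), enlarging the receptive field without extra computation
# per output element and without any downsampling.
```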

Encouraged by the U-Net network, many variants fused with different schemes have been proposed, such as transfer learning [14], attention blocks [15], multi-feature fusion [16], loss optimization [17], etc. Laibacher et al. proposed the M2U-Net model to achieve real-time segmentation [18]. A pre-trained network was introduced into the encoder for effective feature expression, and bilinear upsampling was applied in the decoder, greatly reducing the number of parameters from 31.03 M to 0.55 M. Combined with the Squeeze-and-Excitation block, Di et al. proposed a novel residual model to deal with the class imbalance problem [19]. To handle multiscale objects, Wu et al. proposed an adaptively adjusted receptive field to obtain receptive fields of different sizes [20]. Wang et al. built a feature refinement path to improve feature expression and recognition ability [21]. To alleviate the effect of the pixel imbalance between thick and thin blood vessels, Yan et al. proposed a joint loss that enables the segmentation network to learn more subtle features [22]. Mishra et al. introduced a deep supervision module with an additional supervision layer to increase attention on fine blood vessels [23]. Although the U-Net network and its variants have shown remarkable performance on retinal fundus images, they still fall short of highly accurate retinal vessel segmentation because of information loss and insufficient handling of local feature maps. In particular, the information loss caused by continuous downsampling operations greatly affects the accurate detection of thin blood vessels.

Considering the above discussion, and facing the varying signal-to-noise ratios of vessels in fundus images, a novel multi-task symmetric model, GDF-Net, is proposed for fundus vessel segmentation. It consists of a global segmentation network branch, a detail enhancement network branch and an attention-based fusion network branch. Experiments demonstrate the robustness of GDF-Net. The main contributions are as follows.

(1) Based on a deep learning model, a multi-task symmetric network is proposed for fundus vessel segmentation.

(2) To address the information loss caused by pooling layers, two symmetric networks are proposed to extract global features and detail features, respectively.

(3) An attention fusion network is employed to merge multi-source information and increase the detection accuracy of retinal fundus vessels.

The rest of this paper is organized as follows. Section 2 details the proposed GDF-Net segmentation network. Section 3 describes the experimental datasets used for model evaluation. Section 4 presents the experimental results with discussion. The last section gives the summary and outlook.

Section snippets

Proposed methodology

In this work, a GDF-Net network is proposed for automatic and accurate fundus vessel segmentation. This section provides details of the entire model and each network block.
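Since only a snippet of this section is reproduced here, the following is a rough, hypothetical PyTorch sketch of how the described three-branch layout (global branch, detail branch, attention-based fusion) could be organized. The module names, layer depths and the channel-attention gate are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with BN and ReLU, as commonly used in U-Net-style blocks.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class GlobalBranch(nn.Module):
    """Illustrative encoder-decoder that downsamples to capture global context."""
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.dec1 = conv_block(base * 2 + base, base)  # skip connection from enc1

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        return self.dec1(torch.cat([self.up(e2), e1], dim=1))

class DetailBranch(nn.Module):
    """Illustrative full-resolution branch (no pooling) to preserve thin-vessel detail."""
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        self.body = nn.Sequential(conv_block(in_ch, base), conv_block(base, base))

    def forward(self, x):
        return self.body(x)

class AttentionFusion(nn.Module):
    """Illustrative fusion: channel attention over the concatenated branch features."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.head = nn.Conv2d(ch, 1, 1)  # single-channel vessel probability map

    def forward(self, f):
        return torch.sigmoid(self.head(f * self.gate(f)))

class GDFNetSketch(nn.Module):
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        self.global_branch = GlobalBranch(in_ch, base)
        self.detail_branch = DetailBranch(in_ch, base)
        self.fusion = AttentionFusion(base * 2)

    def forward(self, x):
        g = self.global_branch(x)
        d = self.detail_branch(x)
        return self.fusion(torch.cat([g, d], dim=1))

# Example: a 1x3x64x64 fundus patch yields a 1x1x64x64 vessel probability map.
print(GDFNetSketch()(torch.randn(1, 3, 64, 64)).shape)
```

The structural point mirrored from the text is that the detail branch avoids pooling so that thin-vessel information is preserved, the global branch downsamples for contextual features, and the fusion branch merges the two feature streams with an attention mechanism.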

Experiments and analysis

This section gives the detailed experimental results and analysis. Firstly, the experimental datasets and evaluation metrics are introduced. Secondly, the network configuration used for model training is presented. Thirdly, the experimental results and analysis on each dataset are given to demonstrate the robustness and generality of GDF-Net.

Conclusions

A novel symmetric segmentation model, GDF-Net, is put forward for retinal fundus image segmentation, which is composed of two symmetrical segmentation networks (a global segmentation network branch and a detail enhancement network branch) and an attention-based fusion network branch. Combining qualitative and quantitative analysis, extensive experiments have demonstrated the superiority of GDF-Net. The main work is summarized as follows:

(1) A novel GDF-Net network is put forward for end-to-end

CRediT authorship contribution statement

Jianyong Li: Methodology, Validation, Writing – review & editing. Ge Gao: Software, Writing – original draft. Lei Yang: Supervision, Resources, Visualization. Yanhong Liu: Conceptualization, Methodology.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by the National Key Research & Development Project of China (2020YFB1313701) and the National Natural Science Foundation of China (No. 62003309).

References (57)

  • T.A. Soomro, et al., Deep learning models for retinal blood vessels segmentation: A review, IEEE Access (2019).

  • S. Chaudhuri, et al., Detection of blood vessels in retinal images using two-dimensional matched filters, IEEE Trans. Med. Imaging (1989).

  • A. Norouzi, et al., Medical image segmentation methods, algorithms, and applications, IETE Tech. Rev. (2014).

  • N. Sharma, et al., Automated medical image segmentation techniques, J. Med. Phys. (2010).

  • C. Kirbas, et al., Vessel extraction techniques and algorithms: A survey.

  • R. Toledo, et al., Eigensnakes for vessel segmentation in angiography.

  • J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for semantic segmentation, in: Proceedings of the IEEE...

  • V. Badrinarayanan, et al., SegNet: A deep convolutional encoder-decoder architecture for image segmentation, IEEE Trans. Pattern Anal. Mach. Intell. (2017).

  • L.-C. Chen, et al., DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs, IEEE Trans. Pattern Anal. Mach. Intell. (2017).

  • O. Ronneberger, et al., U-Net: Convolutional networks for biomedical image segmentation.

  • C. Guo, et al., SA-UNet: Spatial attention U-Net for retinal vessel segmentation.

  • A. Sarhan, et al., Transfer learning through weighted loss function and group normalization for vessel segmentation from retinal images.

  • Q. Fu, et al., MSCNN-AM: A multi-scale convolutional neural network with attention mechanisms for retinal vessel segmentation, IEEE Access (2020).

  • X. Guo, et al., Retinal vessel segmentation combined with generative adversarial networks and dense U-Net, IEEE Access (2020).

  • T. Laibacher, T. Weyde, S. Jalali, M2U-Net: Effective and efficient retinal vessel segmentation for real-world...

  • D. Wang, et al., FRNet: An end-to-end feature refinement neural network for medical image segmentation, Vis. Comput. (2021).

  • Z. Yan, et al., Joint segment-level and pixel-wise losses for deep learning based retinal vessel segmentation, IEEE Trans. Biomed. Eng. (2018).

  • S. Mishra, et al., A data-aware deep supervised method for retinal vessel segmentation.