C-Net: Cascaded convolutional neural network with global guidance and refinement residuals for breast ultrasound images segmentation

https://doi.org/10.1016/j.cmpb.2022.107086

Highlights

  • First, we developed a novel cascaded convolutional neural network to segment lesions from breast ultrasound images.

  • Second, a bidirectional attention guidance network was designed to capture the context between global (low-level) and local (high-level) features.

  • Third, we introduced a refinement residual network to obtain a more complete lesion mask.

  • Moreover, the experimental results demonstrate that our method achieves the best overall performance on breast and renal ultrasound image segmentation.

Abstract

Background and objective

Breast lesion segmentation is an important step in computer-aided diagnosis systems. However, speckle noise, heterogeneous structures, and similar intensity distributions make accurate breast lesion segmentation challenging.

Methods

In this paper, we present a novel cascaded convolutional neural network that integrates U-net, a bidirectional attention guidance network (BAGNet), and a refinement residual network (RFNet) for lesion segmentation in breast ultrasound images. Specifically, we first use U-net to generate a set of saliency maps containing low-level and high-level image structures. Then, BAGNet captures the context between global (low-level) and local (high-level) features from the saliency maps. Introducing the global feature map reduces the interference of surrounding tissue on the lesion regions. Furthermore, we developed RFNet, based on the core architecture of U-net, to learn the difference between rough saliency feature maps and ground-truth masks. Learning this residual helps produce a more complete lesion mask.
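The refinement-by-residual idea can be sketched in a few lines of numpy. This is an illustrative toy (the names `rough`, `gt`, `residual` are ours, not the authors' code): RFNet is trained to predict the difference between a rough saliency map and the ground-truth mask, and adding that predicted residual back refines the rough map.

```python
import numpy as np

# Toy 4x4 example: "rough" stands in for the first-stage saliency map,
# "gt" for the binary ground-truth mask.
rough = np.array([[0.9, 0.8, 0.1, 0.0],
                  [0.7, 0.6, 0.2, 0.0],
                  [0.1, 0.1, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0]])
gt = np.array([[1.0, 1.0, 0.0, 0.0],
               [1.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, 0.0]])

residual = gt - rough                        # the target RFNet learns to approximate
refined = np.clip(rough + residual, 0.0, 1.0)  # refinement = rough map + residual
assert np.allclose(refined, gt)              # a perfect residual recovers the mask
```

In practice the residual is only approximated by the network, so the refined map is closer to, but not identical to, the ground truth.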

Results

To evaluate segmentation performance, we compared our network with several state-of-the-art segmentation methods on the public breast ultrasound dataset (BUSIS) using six commonly used evaluation metrics. Our method achieves the highest scores on all six metrics. Furthermore, p-values indicate statistically significant differences between our method and the comparative methods.
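The abstract does not enumerate the six metrics, but two of the most common overlap measures for binary segmentation masks are the Dice similarity coefficient and the Jaccard index (IoU). A minimal numpy sketch, assuming hard binary masks (not the paper's evaluation code):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    """Jaccard index (IoU): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy 3x3 masks: the prediction misses one lesion pixel.
pred = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
gt   = np.array([[1, 1, 0],
                 [1, 1, 0],
                 [0, 0, 0]])
print(dice(pred, gt))     # 6/7 ≈ 0.857
print(jaccard(pred, gt))  # 3/4 = 0.75
```

Dice is always at least as large as IoU on the same pair of masks, which is why papers typically report both.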

Conclusions

Experimental results show that our method achieves the most competitive segmentation results. In addition, we applied the network to renal ultrasound image segmentation. Overall, our method shows good adaptability and robustness in ultrasound image segmentation.

Introduction

Breast cancer seriously threatens the health of women around the world. Because early-stage breast cancer is often asymptomatic and concealed and has multiple pathogenic factors, regular screening is essential for its prevention and diagnosis. Breast ultrasound (BUS) imaging is a widely used screening method because it is painless, noninvasive, highly sensitive, and low-cost [1]. BUS image segmentation helps characterize tissues and improve diagnosis, and it is an important part of BUS computer-aided diagnosis (CAD) systems [2,3]. However, various disturbing factors (such as heterogeneous structures, blurred boundaries, and variable shapes) make accurate BUS image segmentation a challenging task; see Fig. 1 for details.

Many segmentation methods have been proposed to segment breast lesions from ultrasound images [4]. These algorithms can be classified into three types according to the degree of human intervention: manual, semi-automatic, and automatic [5]. Previously, the contours of breast lesions were mainly annotated manually by radiologists, which is time-consuming and error-prone [6]. To alleviate this burden, many semi-automatic methods have been developed for BUS images [5]. These methods not only reduce the variance of manual segmentation of breast lesions but also improve work efficiency. However, Yin et al. [7] pointed out that many semi-automatic methods rely on hand-crafted features. Therefore, segmenting lesions automatically and reliably from BUS images is highly desirable. Recently, convolutional neural networks (CNNs) have been widely used in medical image segmentation [8], [9], [10], [11]. Among them, FCNN [12] and U-net [8] are typical CNN models whose core architectures have received extensive attention in BUS image segmentation [13], [14], [15]. However, surrounding tissue with an appearance similar to the lesion degrades segmentation performance, as shown in Fig. 1. To better exploit contextual information, many methods introduce dilated convolutions and pooling operations to obtain larger receptive fields [16], [17], [18]. However, these operations cannot capture contextual information from a global view and only consider dependencies in the spatial domain. Recently, capturing long-range and global dependency information has been shown to improve segmentation accuracy [19,20].
Although these methods improve the segmentation accuracy of lesion regions, the limited long-range and global features learned in deeper convolutional layers still constrain network performance [21]. Fully exploiting global and long-range dependencies to improve the segmentation accuracy of medical images remains a challenging task.
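The receptive-field argument above can be made concrete with a small calculation. For a stack of stride-1 convolutions, each k × k layer with dilation d widens the effective receptive field by (k − 1)·d, so dilation enlarges the field without adding depth or parameters. A sketch of that standard formula (illustrative, not tied to any particular network in the paper):

```python
# Effective receptive field of a stack of stride-1 dilated convolutions:
# rf = 1 + sum over layers of (kernel - 1) * dilation.
def receptive_field(kernel=3, dilations=(1, 2, 4)):
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

print(receptive_field(3, (1, 1, 1)))  # 7:  three plain 3x3 convs
print(receptive_field(3, (1, 2, 4)))  # 15: same depth, dilated
```

Even so, a 15-pixel field is still local; it cannot relate a lesion to tissue on the far side of the image, which is the gap that global-dependency modules aim to close.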

To address these challenges, we developed a novel cascaded convolutional neural network that integrates U-net, a bidirectional attention guidance network (BAGNet), and a refinement residual network (RFNet) to segment breast lesions, as shown in Fig. 2. BAGNet captures the context between global (low-level) and local (high-level) features. RFNet learns the difference between rough saliency feature maps and ground-truth masks; this residual learning helps produce a more complete lesion mask. Our main contributions are as follows:

First, we developed a novel bidirectional attention guidance network (BAGNet) and used it to construct a cascaded convolutional neural network (denoted C-Net) to segment breast lesions from ultrasound images.

Second, we evaluated the network on the public BUS dataset using six evaluation metrics, and the experimental results demonstrate that our method outperforms state-of-the-art segmentation methods on BUS image segmentation.

Furthermore, we applied our method to renal ultrasound image segmentation, where it remains highly competitive with state-of-the-art segmentation methods.

Section snippets

Traditional methods for BUS image segmentation

Conventional methods for BUS image segmentation mainly include region-based methods, graph-based approaches, and deformable models [2,5]. Xian et al. [22] proposed a fully automatic BUS image segmentation method for accurate lesion segmentation. Xiao et al. [23] developed a method to segment tissue in ultrasound images by combining maximum a posteriori (MAP) estimation and Markov random fields (MRF) and applied it to BUS images. The segmentation results of this convex optimization method on breast

Materials and method

In this section, we first describe the segmentation network architecture for breast lesions. Then, the design details of the three key components of the network are presented. Finally, the loss function and experimental parameters for network training are introduced.

Ablation study

In this section, we show the effectiveness of the principal components of our network, i.e., the bidirectional attention guidance network (BAGNet) and the refinement residual network (RFNet). The ablation experiments are mainly conducted on the public BUSIS dataset.

The baseline network is U-net (i.e., the first row of Table 1), which includes four down-sampling operations and four up-sampling operations. Table 1 illustrates the results of the architecture ablation study. As we can see, our method
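As a side note on the baseline's geometry: four 2× down-sampling steps mean the U-net bottleneck sees the input at 1/16 resolution, so input sides should be multiples of 16. A small sketch of that size bookkeeping (our illustration; the paper does not state its input resolution here):

```python
# Spatial size at the bottleneck of an encoder with n_down 2x down-samplings.
def bottleneck_size(h, w, n_down=4):
    factor = 2 ** n_down
    if h % factor or w % factor:
        raise ValueError("input sides must be divisible by %d" % factor)
    return h // factor, w // factor

print(bottleneck_size(256, 256))  # (16, 16)
print(bottleneck_size(128, 96))   # (8, 6)
```

The four matching up-sampling steps then restore the mask to the input resolution.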

Discussion

The generalization ability of the network is further validated on our collected renal ultrasound dataset. The dataset contains 300 clinical renal ultrasound images from 300 patients at the Fourth Medical Center of the PLA General Hospital and the Civil Aviation General Hospital. The images come from two ultrasound devices (Philips and Esaote), with 150 images from each device. The pixel size of the images ranges from 0.12 to 0.32 mm. The renal labels used for network training and

Conclusion

To improve the segmentation accuracy of breast lesions, this paper presented a novel cascaded convolutional neural network (C-Net) to segment lesions from BUS images. The network is mainly composed of U-net, a bidirectional attention guidance network (BAGNet), and a refinement residual network (RFNet). BAGNet captures the context between global (low-level) and local (high-level) features from the saliency map, which reduces the interference of

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant Nos.: U1913207, 51875394).

References (40)

  • E.H. Houssein et al.

    Deep and machine learning techniques for medical imaging-based breast cancer: a comprehensive review

    Expert Syst. Appl.

    (2021)
  • B. Lei et al.

    Segmentation of breast anatomy for automated whole breast ultrasound images with boundary regularized convolutional encoder–decoder network

    Neurocomputing

    (2018)
  • W.K. Moon et al.

    Computer-aided diagnosis of breast ultrasound images using ensemble learning from convolutional neural networks

    Comput. Methods Progr. Biomed.

    (2020)
  • W. Al-Dhabyani et al.

    Dataset of breast ultrasound images

    Data Brief

    (2020)
  • G. Chen et al.

    A novel convolutional neural network for kidney ultrasound image segmentation

    Comput. Methods Progr. Biomed.

    (2022)
  • J.A. Noble et al.

    Ultrasound image segmentation: a survey

    IEEE Trans. Med. Imaging

    (2006)
  • Q. Huang et al.

    Breast ultrasound image segmentation: a survey

    Int. J. Comput. Assist. Radiol. Surg.

    (2017)
  • O. Ronneberger et al.

    U-net: convolutional networks for biomedical image segmentation

  • Z. Zhou et al.

    UNet++: redesigning skip connections to exploit multiscale features in image segmentation

    IEEE Trans. Med. Imaging

    (2020)
  • H. Huang et al.

    UNet 3+: a full-scale connected UNet for medical image segmentation

    IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

    (2020)