Computer‐aided diagnosis of breast ultrasound images using ensemble learning from convolutional neural networks

https://doi.org/10.1016/j.cmpb.2020.105361

Highlights

  • A computer-aided diagnosis (CAD) system was proposed to diagnose breast cancer in ultrasound images.

  • The proposed system combines an image fusion method with different image content representations and ensembles different CNN architectures on ultrasound (US) images.

  • On the SNUH dataset, the accuracy, sensitivity, specificity, precision, F1 score, and AUC of the proposed method were 91.10%, 85.14%, 95.77%, 94.03%, 89.36%, and 0.9697, respectively. On the open dataset (BUSI), the accuracy, sensitivity, specificity, precision, F1 score, and AUC of the proposed method were 94.62%, 92.31%, 95.60%, 90.00%, 91.14%, and 0.9711, respectively.

Abstract

Breast ultrasound combined with computer-aided diagnosis (CAD) has been used to classify tumors as benign or malignant. However, conventional CAD software has several problems: handcrafted features are hard to design, and conventional CAD systems make overfitting difficult to detect. In our study, we propose a CAD system for tumor diagnosis that uses an image fusion method combined with different image content representations and ensembles different CNN architectures on US images. The CNN-based methods used in this study include VGGNet, ResNet, and DenseNet. Our private dataset contained a total of 1687 tumors, including 953 benign and 734 malignant tumors. The accuracy, sensitivity, specificity, precision, F1 score, and AUC of the proposed method were 91.10%, 85.14%, 95.77%, 94.03%, 89.36%, and 0.9697, respectively. The open dataset (BUSI) contained a total of 697 tumors, including 437 benign lesions, 210 malignant tumors, and 133 normal images. The accuracy, sensitivity, specificity, precision, F1 score, and AUC of the proposed method were 94.62%, 92.31%, 95.60%, 90.00%, 91.14%, and 0.9711, respectively. In conclusion, the results indicate that different image content representations affect the prediction performance of the CAD system, that more image information improves prediction performance, and that the tumor shape feature can improve the diagnostic effect.

Introduction

Ultrasound (US) is a useful modality for the detection and diagnosis of breast cancer [1] because it is non-invasive, non-radioactive, and provides real-time imaging at high resolution. However, reading US images requires well-trained and experienced radiologists. Even well-trained experts may show high inter-observer variation in tumor diagnosis [2]. Hence, computer-aided diagnosis (CAD) can be used to assist radiologists in breast cancer classification and detection [3], [4], [5], [6]. Recently, several studies [7], [8], [9], [10] have discussed automatic methods to classify benign and malignant tumors in US images.

Convolutional neural network (CNN) approaches have proven to be very effective in a wide range of computer vision applications [11], [12], [13], [14]. In addition, CNNs can recognize visual patterns directly from pixel images with minimal preprocessing and automate the whole feature extraction process. Furthermore, CNNs have been employed broadly in medical image analysis, such as segmentation [15], classification [16], and detection [17]. In recent years, the use of CNN models for breast cancer ultrasound has shown significant development. Byra et al. [18] proposed a color conversion method that transfers grayscale ultrasound images to 3-channel (RGB) images, which enhanced classification performance. Yap et al. [19] proposed an end-to-end deep learning model for automated breast ultrasound lesion recognition; they were the first to implement semantic segmentation on BUS images and compared the performance of different CNN models. Yap et al. [20] proposed an automatic detection system for breast ultrasound lesions using CNN models, comparing three different CNN-based CAD systems, which reduced the operator-dependence problem. Although CNN methods are widely used in medical imaging for segmentation and diagnosis, we want to understand the impact of different image content descriptions and CNN architectures on US diagnostic systems.
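To make the grayscale-to-RGB conversion concrete, the sketch below shows the simplest variant, plain channel replication, so that a single-channel US image matches the 3-channel input expected by ImageNet-pretrained CNNs. This is only an illustration of the idea; Byra et al. [18] describe a learned color conversion, not plain replication.

```python
import numpy as np

def gray_to_rgb(gray: np.ndarray) -> np.ndarray:
    """Replicate a single-channel ultrasound image into 3 identical channels.

    A minimal stand-in for a color conversion step: pretrained CNNs such as
    VGGNet/ResNet/DenseNet expect (H, W, 3) inputs, while US images are (H, W).
    """
    if gray.ndim != 2:
        raise ValueError("expected a 2-D grayscale image")
    return np.stack([gray, gray, gray], axis=-1)

# Example: a dummy 128x128 US frame becomes a 128x128x3 tensor
img = np.zeros((128, 128), dtype=np.float32)
rgb = gray_to_rgb(img)
print(rgb.shape)  # (128, 128, 3)
```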

In our study, we propose a CAD system for tumor diagnosis that uses an image fusion method combined with different image content representations and ensembles different CNN architectures on US images. First, we manually extract the region of interest (ROI), which covers the whole tumor with the ROI boundary close to the tumor margin. Then, an expert manually extracts the tumor region and the tumor shape image (TSI). In addition, we employ an image fusion method to enhance the diagnostic performance of our CAD system. Finally, we employ an ensemble method to combine the outputs of multiple CNNs.
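The final ensembling step above can be sketched as probability averaging (soft voting) across models. This is a generic sketch under the assumption of equal or fixed model weights; the paper's exact ensemble rule is given in its Method section and may differ.

```python
import numpy as np

def ensemble_predict(prob_list, weights=None):
    """Fuse class probabilities from several CNNs by weighted averaging.

    prob_list: list of (n_samples, n_classes) probability arrays, one per model.
    weights:   optional per-model weights; equal weighting by default.
    Returns predicted class indices for each sample.
    """
    probs = np.stack(prob_list, axis=0)           # (n_models, n, c)
    if weights is None:
        weights = np.ones(len(prob_list))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                      # normalize to sum to 1
    fused = np.tensordot(weights, probs, axes=1)  # (n, c) averaged probabilities
    return fused.argmax(axis=1)

# Hypothetical outputs of three CNNs on two tumors (0 = benign, 1 = malignant)
p_vgg   = np.array([[0.80, 0.20], [0.40, 0.60]])
p_res   = np.array([[0.70, 0.30], [0.30, 0.70]])
p_dense = np.array([[0.60, 0.40], [0.55, 0.45]])
print(ensemble_predict([p_vgg, p_res, p_dense]))  # [0 1]
```

Soft voting is preferred over hard (majority) voting here because it keeps each model's confidence, so a strongly confident model can outweigh two marginal ones.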

Section snippets

Material

In this study, we used two breast ultrasound datasets: a private dataset collected at Seoul National University Hospital (SNUH, Korea; dataset SNUH) and a public dataset (dataset BUSI [21]) collected by Baheya Hospital for Early Detection & Treatment of Women's Cancer (Cairo, Egypt) (https://doi.org/10.1016/j.dib.2019.104863).

Method

In our study, we proposed a CAD system for tumor diagnosis by using different CNN architectures with the ensemble method on US images. Fig. 2 shows the flow chart of our CAD system.

Experiment results

In this section, we compare the diagnostic performance of all CNN architectures, including VGG-Like, VGG-16, ResNet-18, ResNet-50, ResNet-101, DenseNet-40, DenseNet-121, and DenseNet-161. Furthermore, we list the base machines that achieved the best performance on the test set and compare the performance of different ensemble methods.

In the statistical analysis, six quantitative indicators were used to evaluate diagnostic performance: accuracy (ACC), sensitivity (SEN), specificity (SPE), precision (PRE), F1 score, and the area under the ROC curve (AUC).
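The five threshold-based indicators can be computed directly from a confusion matrix, as in the sketch below. The counts used are hypothetical, not the paper's actual confusion matrix, and AUC is omitted because it requires the full score distribution rather than a single confusion matrix.

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int):
    """Compute ACC, SEN, SPE, PRE, and F1 from confusion-matrix counts.

    tp/fn: malignant tumors correctly / incorrectly classified.
    tn/fp: benign tumors correctly / incorrectly classified.
    """
    acc = (tp + tn) / (tp + fp + tn + fn)  # accuracy
    sen = tp / (tp + fn)                   # sensitivity (recall)
    spe = tn / (tn + fp)                   # specificity
    pre = tp / (tp + fp)                   # precision
    f1 = 2 * pre * sen / (pre + sen)       # harmonic mean of PRE and SEN
    return acc, sen, spe, pre, f1

# Hypothetical counts for illustration only
acc, sen, spe, pre, f1 = diagnostic_metrics(tp=80, fp=10, tn=95, fn=15)
print(round(acc, 4), round(sen, 4), round(spe, 4))  # 0.875 0.8421 0.9048
```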

Conclusion and discussion

In recent years, several studies [7], [8], [9], [10] have published breast cancer diagnosis methods that classify benign and malignant tumors in US images. More recently, several studies [11,36,[47], [48], [49]] have been published on diagnosing breast cancer with convolutional neural networks. Many studies [11,36,48] enhance diagnostic performance by extending model capability through different CNN architectures or different machine learning methods. Furthermore, some

Declaration of Competing Interest

The authors declare that they have no financial and personal relationships with other people or organizations that could inappropriately influence their work.

Acknowledgment

The authors thank the Ministry of Science and Technology of Taiwan (MOST 107-2634-F-002-013, MOST 108-2634-F-002-010, and MOST 109-2634-F-002-026) for financial support.

References (51)

  • T. Fawcett

    An introduction to ROC analysis

    Pattern Recognit. Lett.

    (2006)
  • Q. Zhang et al.

    Deep learning based classification of breast tumors with shear-wave elastography

    Ultrasonics

    (2016)
  • T. Tan et al.

    Computer-aided lesion diagnosis in automated 3-D breast ultrasound using coronal spiculation

    IEEE Trans. Med Imaging

    (2012)
  • M. Samulski et al.

    Using computer-aided detection in mammography as a decision support

    Eur. Radiol.

    (2010)
  • L.A. Meinel et al.

    Breast MRI lesion classification: improved performance of human readers with a backpropagation neural network computer‐aided diagnosis (CAD) system

    J. Magnetic Resonance Imaging

    (2007)
  • B. Sahiner et al.

    Malignant and benign breast masses on 3D US volumetric images: effect of computer-aided diagnosis on radiologist accuracy

    Radiology

    (2007)
  • H.-P. Chan et al.

    Improvement of radiologists' characterization of mammographic masses by using computer-aided diagnosis: an ROC study

    Radiology

    (1999)
  • H.-W. Lee et al.

    Breast tumor classification of ultrasound images using wavelet-based channel energy and imageJ

    IEEE J. Sel. Top. Signal Process

    (2009)
  • P.-H. Tsui et al.

    Classification of benign and malignant breast tumors by 2-D analysis based on contour description and scatterer characterization

    IEEE Trans. Med. Imaging

    (2010)
  • F. Hu et al.

    Transferring deep convolutional neural networks for the scene classification of high-resolution remote sensing imagery

    Remote Sens. (Basel)

    (2015)
  • R. Girshick et al.

    Region-based convolutional networks for accurate object detection and segmentation

    IEEE Trans. Pattern Anal. Mach. Intell.

    (2016)
  • R. Girshick et al.

    Rich feature hierarchies for accurate object detection and semantic segmentation

    Proc. IEEE Conf. Comput. Vis.Pattern Recognit.

    (2014)
  • A. Krizhevsky et al.

    ImageNet classification with deep convolutional neural networks

    Adv. Neural Inf. Processing Syst.

    (2012)
  • H.R. Roth et al.

    Improving computer-aided detection using convolutional neural networks and random view aggregation

    IEEE Trans. Med. Imaging

    (2016)
  • M. Byra et al.

    Breast mass classification in sonography with transfer learning using a deep convolutional neural network and color conversion

    Med. Phys.

    (2019)