
Convolutional Neural Network for Automated Mass Segmentation in Mammography


Abstract:

Automatic segmentation and localization of lesions in mammogram (MG) images are challenging problems, even with advanced methods such as deep learning (DL) [1]-[3]. To address these challenges, we propose a U-Net approach to automatically detect and segment lesions in MG images. U-Net [4] is an end-to-end convolutional neural network (CNN) model that has achieved remarkable results in segmenting biomedical images [5]. We modified the U-Net architecture to maximize its precision, for example by adding batch normalization and dropout and by applying data augmentation. Owing to its architecture, the proposed U-Net model efficiently predicts a pixel-wise segmentation map of an input full MG image. These pixel-wise segmentation maps help radiologists differentiate benign from malignant lesions based on lesion shape. The main challenge most DL methods face in mammography is the need for large annotated training datasets: to train such networks without over-fitting, thousands or millions of training MG images are required [1], [3], [5]. In contrast, U-Net is capable of learning from a relatively small training dataset compared to other DL methods [4]. We used publicly available databases (CBIS-DDSM, BCDR-01, and INbreast) and MG images from the University of Connecticut Health Center (UCHC) to train the proposed U-Net model [3]. The proposed U-Net method is trained on MG images that contain mass lesions of different sizes, shapes, and margins, with intensity variation around the mass boundaries. All training MG images containing suspicious areas are accompanied by associated pixel-level ground truth maps (GTMs), which label each pixel as background or breast lesion. A total of 2066 MG images and their corresponding segmentation GTMs are used to train the proposed U-Net model.
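The encoder-decoder structure with skip connections that U-Net relies on can be illustrated shape-wise. The following minimal NumPy sketch (hypothetical sizes, not the authors' implementation, and with the convolution and normalization layers omitted) traces how the contracting path halves spatial resolution, the expanding path doubles it back, and each decoder level concatenates the matching encoder features:

```python
import numpy as np

def max_pool2(x):
    """2x2 max pooling: halves height and width."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour upsampling: doubles height and width."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_shapes(img):
    """Trace the U-Net spatial flow on one (H, W, C) image.

    Only pooling, upsampling, and the skip concatenations are shown;
    the learned convolutions between them are left out.
    """
    e1 = img                    # encoder level 1: H   x W
    e2 = max_pool2(e1)          # encoder level 2: H/2 x W/2
    b = max_pool2(e2)           # bottleneck:      H/4 x W/4
    d2 = np.concatenate([upsample2(b), e2], axis=-1)   # skip join, H/2 x W/2
    d1 = np.concatenate([upsample2(d2), e1], axis=-1)  # skip join, H   x W
    return e1.shape, e2.shape, b.shape, d2.shape, d1.shape

shapes = unet_shapes(np.zeros((64, 64, 1)))
```

Because the decoder output has the same height and width as the input, a final 1x1 convolution over `d1` yields the per-pixel class map that the abstract describes.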
Moreover, we applied the adaptive median filter (AMF) and the contrast-limited adaptive histogram equalization (CLAHE) filter to t...
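As a rough illustration of the AMF preprocessing step mentioned above, here is a minimal NumPy implementation of one common adaptive-median scheme (the paper does not specify its exact variant or window sizes, so `max_size=7` is an assumption; CLAHE is likewise available off the shelf, e.g. `skimage.exposure.equalize_adapthist` or OpenCV's `createCLAHE`):

```python
import numpy as np

def adaptive_median_filter(img, max_size=7):
    """Adaptive median filter for a 2-D grayscale image.

    Around each pixel, grow the window (3x3, 5x5, ...) until the window
    median is not itself an impulse (min < median < max); then keep the
    original pixel unless it is an impulse, in which case replace it
    with the window median.
    """
    pad = max_size // 2
    padded = np.pad(img, pad, mode='reflect')
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            for size in range(3, max_size + 1, 2):
                r = size // 2
                win = padded[i + pad - r:i + pad + r + 1,
                             j + pad - r:j + pad + r + 1]
                lo, med, hi = win.min(), np.median(win), win.max()
                if lo < med < hi:                 # median is reliable
                    if not (lo < img[i, j] < hi): # pixel is an impulse
                        out[i, j] = med
                    break
            else:
                out[i, j] = med  # fell through at max window size
    return out

# Tiny usage example: a flat image with one salt-noise impulse.
noisy = np.full((11, 11), 100.0)
noisy[5, 5] = 255.0
cleaned = adaptive_median_filter(noisy)
```

The per-pixel Python loops keep the sketch readable; a production version would vectorize or call an optimized library routine.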
Date of Conference: 18-20 October 2018
Date Added to IEEE Xplore: 22 November 2018
Conference Location: Las Vegas, NV, USA

