Elsevier

Pattern Recognition Letters

Volume 129, January 2020, Pages 293-303

Developed Newton-Raphson based deep features selection framework for skin lesion recognition

https://doi.org/10.1016/j.patrec.2019.11.034


Abstract

Melanoma is the most fatal form of skin cancer; however, its diagnosis at the primary stages significantly reduces the mortality rate. These days, the increasing number of skin cancer patients has boosted the requirement for a clinical decision support system capable of detecting lesions with high accuracy. In this work, a method is proposed for skin cancer localization and recognition by implementing a novel combination of a deep learning model and an iteration-controlled Newton-Raphson (IcNR) based feature selection method. The proposed framework follows three primary steps: lesion localization through a faster region based convolutional neural network (Faster RCNN), deep feature extraction, and feature selection by the IcNR approach. In the localization step, a new contrast stretching approach based on the artificial bee colony (ABC) method is followed. The enhanced images, along with their ground truths, are then fed into Faster RCNN to obtain segmented images. A pre-trained model, DenseNet201, is utilized to extract deep features via transfer learning, which are later subjected to a selection step using the proposed IcNR approach. The selected most discriminant features are finally utilized for classification using multilayered feed-forward neural networks. Tests performed on the ISBI2016 and ISBI2017 datasets achieve accuracies of 94.5% and 93.4%, respectively. Simulation results reveal that the proposed technique outperforms existing methods in terms of both accuracy and execution time.

Introduction

Melanoma is the deadliest type of skin cancer, responsible for the deaths of a large number of people worldwide [1], [2]. Skin cancer can be treated if diagnosed at an early stage; otherwise, the consequences can be severe [3]. At the early stage, melanoma starts in the melanocyte cells and appears like a mole of black or brown color [4], [5]. In the year 2017, the skin cancer cases reported in the United States (US) alone were 95,360 (57,140 men and 38,220 women), of which 87,110 were melanoma cases (52,170 men and 34,940 women). The estimated deaths in the US in 2017 were 13,590 (9250 men and 4340 women) [6], [7]. In the year 2018, an estimated 99,550 cases (60,350 men and 39,200 women) were reported, of which 91,270 were melanoma cases, including 55,150 men and 36,120 women. The deaths in 2018 were 13,460, including 9070 men and 4390 women.

In 2019 alone, statistics show 104,350 cases, including 62,320 men and 42,030 women. The number of melanoma cases is 96,480, including 57,220 men and 39,260 women. The number of melanoma deaths in the US during 2019 is 7320 [8]. Fig. 1 summarizes cancer statistics for the years 2015 to 2019.

A conventional method for skin lesion detection is visual inspection, which is quite a challenging task as it clearly depends on the expert. A dermatologist mostly uses screening methods such as the 7-point checklist [9], the ABCDE rule [10], and several other advanced techniques such as optical imaging systems [11] for the detection of skin lesions. These methods perform well but are time-consuming and are not free from human error. Due to recent advancements in the field of computer vision (CV), several computerized systems are utilized in clinics, which help doctors with diagnosis at an early stage. Most of the existing methods incorporate four primary stages for skin lesion detection, from contrast stretching to classification [12], [13].

The preprocessing step is very important for the removal of noise such as hair and bubbles, and it also plays a vital role in accurate segmentation [14]. Feature extraction from the segmented image is a crucial step, as good features lead to an accurate classification and vice versa [15], [16]. Lately, with the advent of deep learning methods [17], [18], [19], there is an increasing trend to utilize them in the medical domain [20]. By embedding the concept of transfer learning, convolutional neural network (CNN) models [21], [22] trained on large image datasets are retrained on the skin datasets. Feature selection is an important research area in machine learning and CV [23], [24]. In medical imaging, the extraction of features from raw images generates various patterns of information, some of which are not essential for the classification task [25]. The irrelevant information misguides the selected classifiers and reduces the overall performance.
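
As an illustration of the transfer learning idea mentioned above, the following minimal PyTorch sketch adapts an ImageNet pre-trained DenseNet201 to a two-class (benign vs. malignant) skin dataset by freezing the convolutional trunk and retraining only the classifier head. The freezing strategy and learning rate are illustrative assumptions, not the exact training protocol of this paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load DenseNet201 pre-trained on ImageNet and adapt it to a
# two-class (benign vs. malignant) skin lesion problem.
model = models.densenet201(pretrained=True)

# Freeze the convolutional trunk so only the new classifier head is
# retrained on the (much smaller) skin dataset.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the 1000-class ImageNet classifier with a 2-class head.
model.classifier = nn.Linear(model.classifier.in_features, 2)

# Only the classifier parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```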

This work is inspired by the comparative study of Fernandes et al. [26], in which early skin lesions are detected based on two state-of-the-art techniques, color constancy and skin lesion analysis. The authors performed a detailed analysis and concluded that the color constancy approach is a better choice for skin lesion detection. Additionally, they concluded that early detection of skin lesions is quite expedient for the treatment of melanoma.

In this work, a pre-trained DenseNet CNN model [27] is implemented for deep feature extraction, and the most discriminant features are later selected by employing a Newton-Raphson (NR) method. Our major contributions are: (a) an efficient artificial bee colony (ABC) based contrast stretching method is proposed for accurate segmentation; (b) Faster RCNN is implemented for lesion detection, utilizing the ground truth pixel information; (c) an entropy based activation function for deep feature extraction is implemented; and (d) an iteration-controlled Newton-Raphson (IcNR) computational method is implemented for the selection of the most discriminant features.
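
The exact IcNR formulation is given in Section 3. As a rough illustration of how an iteration-controlled Newton-Raphson update can drive feature selection, the sketch below searches for a score threshold that keeps a target fraction of features, using a smooth sigmoid surrogate so the derivative required by Newton-Raphson exists. The scoring function, keep ratio, and surrogate are assumptions for illustration only, not the authors' IcNR method.

```python
import numpy as np

def icnr_style_threshold(scores, keep_ratio=0.3, max_iter=10, tau=0.05):
    """Illustrative iteration-controlled Newton-Raphson search for a score
    threshold t such that roughly `keep_ratio` of the features score above t.
    A sigmoid surrogate replaces the hard indicator so the derivative exists.
    This is only a sketch of the idea, not the authors' exact IcNR method.
    """
    scores = np.asarray(scores, dtype=float)
    t = scores.mean()                                    # initial guess
    for _ in range(max_iter):                            # iteration control
        s = 1.0 / (1.0 + np.exp(-(scores - t) / tau))    # soft "feature kept" indicator
        f = s.mean() - keep_ratio                        # root of f(t) = 0 is the target threshold
        df = -(s * (1.0 - s)).mean() / tau               # derivative of f with respect to t
        if abs(df) < 1e-12:
            break
        t_next = t - f / df                              # Newton-Raphson update
        if abs(t_next - t) < 1e-6:
            break
        t = t_next
    return t

# Hypothetical usage: one relevance score per deep feature, keep the top ~30%.
scores = np.random.rand(1920)                            # DenseNet201 yields 1920-D pooled features
threshold = icnr_style_threshold(scores, keep_ratio=0.3)
selected_idx = np.where(scores >= threshold)[0]
```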

The remaining manuscript is organized as follows: related work is described in Section 2. The proposed DLNR method is presented in Section 3. Results and comparisons are discussed in Section 4. Finally, Section 5 concludes the manuscript.

Section snippets

Related Work

An automatic mechanism for the recognition of skin lesions is an arduous task due to a set of factors including low contrast, irregularity, and the presence of several artifacts such as hairs and bubbles. Manual inspection of a skin lesion depends on a qualified specialist, who may not be available at all times; therefore, machine learning based methods have been proposed by a pool of researchers working in this domain. Codella et al. [28] presented a hybrid method for lesion recognition, ensemble deep

Proposed methodology

The proposed deep learning and NR (DLNR) based skin lesion recognition system consists of the primary steps shown in Fig. 2. First, contrast stretching is performed to visually improve the lesion area. Later, Faster RCNN (F-RCNN) is applied to the contrast stretched images for lesion boundary localization. After localization of the lesion boundary, deep features are extracted by employing the pre-trained CNN model DenseNet. Transfer learning based optimized skin lesion features are extracted that later
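
A minimal sketch of the deep feature extraction step, assuming torchvision's ImageNet pre-trained DenseNet201 as the backbone: the convolutional trunk is applied to a localized lesion crop and its output is global-average-pooled into a 1920-dimensional feature vector. The input size, preprocessing, and crop path are placeholders rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# DenseNet201 convolutional trunk used as a fixed deep feature extractor.
backbone = models.densenet201(pretrained=True).features.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_deep_features(image_path):
    """Return a 1920-D DenseNet201 feature vector for one lesion crop."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        fmap = backbone(img)                    # (1, 1920, 7, 7) feature maps
        feat = F.adaptive_avg_pool2d(fmap, 1)   # global average pooling
    return feat.flatten().numpy()

# Hypothetical usage on a crop produced by the lesion localization step:
# features = extract_deep_features("lesion_crop.png")
```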

Datasets description

The proposed skin lesion recognition method is tested on two datasets, ISBI 2016 [46] and ISBI 2017 [47]. The ISBI 2017 dermoscopy dataset includes a total of 2750 RGB images of different resolutions, of which 517 are malignant and 2223 are benign. The ISBI 2016 dermoscopy dataset includes a total of 1279 images of different resolutions, with 273 malignant and 1006 benign.
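
For reference, a stratified train/test split over one of the ISBI ground-truth files could look like the sketch below. The CSV file name, the `melanoma` column, and the 70/30 split ratio are assumptions for illustration, not the split protocol reported in the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical ground-truth file: one row per dermoscopy image with a
# binary melanoma label (1 = malignant, 0 = benign).
labels = pd.read_csv("ISIC-2017_Training_Part3_GroundTruth.csv")

train_df, test_df = train_test_split(
    labels,
    test_size=0.3,                 # 70/30 split is an assumption, not the paper's protocol
    stratify=labels["melanoma"],   # preserve the benign/malignant imbalance
    random_state=0,
)
print(len(train_df), "training images,", len(test_df), "testing images")
```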

Simulation procedure

In the simulation procedure, the images are divided into training and testing sets. Two different

Conclusion

A new automated system is proposed for skin lesion localization and recognition, utilizing the concepts of deep learning and IcNR based feature selection. The proposed IcNR selection method is evaluated on two freely available datasets, ISBI2016 and ISBI2017, achieving average accuracies of 94.5% and 93.4%, respectively. From the results, it is concluded that the contrast stretching step increases the segmentation accuracy by enhancing the lesion area relative to the background.

Declaration of Competing Interest

On behalf of all authors, the corresponding author declares that there is no conflict of interest.

References (56)

  • M.A. Khan et al.

    Construction of saliency map and hybrid set of features for efficient segmentation and classification of skin lesion

    Microsc. Res. Tech.

    (2019)
  • M. Nasir et al.

    An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach

    Microsc. Res. Tech.

    (2018)
  • N.C. Codella et al.

    Skin lesion analysis toward melanoma detection: a challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC)

  • M.E. Celebi et al.

    Dermoscopy image analysis: overview and future directions

    IEEE J. Biomed. Health Inform.

    (2019)
  • J. Kawahara et al.

    Seven-point checklist and skin lesion classification using multitask multimodal neural nets

    IEEE J. Biomed. Health Inform.

    (2019)
  • S. Goldsmith et al.

    A series of melanomas smaller than 4 mm and implications for the ABCDE rule

    J. Eur. Acad. Dermatol. Venereol.

    (2007)
  • Z. Turani et al.

    Optical radiomic signatures derived from optical coherence tomography images to improve identification of melanoma

    Cancer Res.

    (2019)
  • A. Liaqat et al.

    Automated ulcer and bleeding classification from WCE images using multiple features fusion and selection

    J. Mech. Med. Biol.

    (2018)
  • S. Naqi et al.

    Lung nodule detection using polygon approximation and hybrid features from CT images

    Curr. Med. Imaging Rev.

    (2018)
  • F. Bokhari et al.

    Fundus image segmentation and feature extraction for the detection of glaucoma: a new approach

    Curr. Med. Imaging Rev.

    (2018)
  • A. Mahbod et al.

    Skin lesion classification using hybrid deep neural networks

  • M.A. Khan et al.

    Lungs cancer classification from CT images: an integrated design of contrast based classical features fusion and selection

    Pattern Recognit. Lett.

    (2019)
  • M. Rashid et al.

    Object detection and classification: a joint selection and fusion strategy of deep convolutional neural network and sift point features

    Multimed. Tools Appl.

    (2018)
  • M. Sharif et al.

    Deep CNN and geometric features-based gastrointestinal tract diseases detection and classification from wireless capsule endoscopy images

    J. Exp. Theor. Artif. Intell.

    (2019)
  • S. Mukherjee et al.

    Malignant melanoma classification using cross-platform dataset with deep learning CNN architecture

    Recent Trends in Signal and Image Processing

    (2019)
  • J. Amin et al.

    Brain tumor classification based on DWT fusion of MRI sequences using convolutional neural network

    Pattern Recognit. Lett.

    (2019)
  • M. Sharif et al.

    A framework for offline signature verification system: Best features selection approach

    Pattern Recognit. Lett.

    (2018)
  • M.A. Khan et al.

    Classification of gastrointestinal diseases of stomach from WCE using improved saliency-based method and discriminant features selection

    Multimed. Tools Appl.

    (2019)