Open Access (CC BY 4.0). Published by De Gruyter, June 19, 2024

Deep and hand-crafted features based on Weierstrass elliptic function for MRI brain tumor classification

Ibtisam Aldawish and Hamid A. Jalab

Abstract

Advances in medical imaging and artificial intelligence have improved the accuracy of diagnosis and non-invasive patient examination. The use of magnetic resonance imaging (MRI) brain scans as a screening tool has increased in recent years, and numerous studies have proposed a variety of feature extraction methods to classify abnormal growths in MRI scans. Recently, MRI texture analysis and the use of deep features have yielded remarkable performance improvements in the classification and diagnosis of challenging pathologies, such as brain tumors. This study proposes a handcrafted model based on the Weierstrass elliptic function (WEF) combined with deep features from DenseNet-201 to classify brain tumors in MRI images. By calculating the energy of each individual pixel, the Weierstrass coefficients of the WEF capture high-frequency image details of the brain image, and the WEF model extracts the nonlinear patterns in MRI images based on the probability of each pixel. Meanwhile, the dense connectivity of the DenseNet-201 architecture allows it to learn features at multiple scales and abstraction levels. These features are passed to a support vector machines classifier, which classifies the brain tumor. The classification accuracy achieved is 98.55% for the combined features of the WEF with the trained DenseNet-201. Findings on the brain tumor segmentation dataset indicated that the proposed method performed better than alternative techniques for classifying brain tumors.

JEL Classification: 00A35

Terminology

AVG: average pooling layer
BRATS: brain tumor segmentation
CNNs: convolutional neural networks
DB: dense blocks
DL: deep learning
DWT: discrete wavelet transform
FC: fully connected layer
LSTM: long short-term memory
MP: max pooling layer
MRI: magnetic resonance imaging
PCA: principal component analysis
SVM: support vector machines
WEF: Weierstrass elliptic function

1 Introduction

The brain is a crucial human body part that controls the functions of all other organs and assists in decision-making. Brain tumors are one of the deadliest diseases in the world, affecting people physically, intellectually, and psychologically. Most of these diseases begin in other parts of the body and subsequently progress to the brain. The ability to correctly diagnose and treat brain tumors is made possible by the classification of brain tumor images. Brain tumors are a serious medical condition that can cause a wide range of symptoms and, if untreated, can be fatal. The best treatment strategy and increased likelihood of a patient’s recovery depend on an early and accurate diagnosis [1]. Uncontrolled proliferation leads to the formation of malignant cells, which are pathologic cells. Brain tumors are made up of heterogeneous cells that proliferate at an uncontrollable rate and differ greatly in terms of morphological traits and genetic variants. Some of the common types of brain tumors that can be classified using imaging include:

  • Gliomas: tumors that arise from the brain’s supporting glial cells.

  • Meningiomas: These tumors grow from the meninges, the membranes that cover the brain and spinal cord.

  • Pituitary tumors: The pituitary gland, which is found at the base of the brain, is where these tumors grow.

  • Metastatic tumors: These tumors are secondaries that have metastasized from other parts of the body to the brain.

  • Medulloblastomas: These are rapidly expanding tumors that most frequently affect children and form in the cerebellum, the region of the brain responsible for balance and coordination.

Early identification of a brain tumor is essential for lowering the patient’s mortality rate. However, due to variations in tumor size, shape, location, and kind, it is a difficult process [2]. The result of a magnetic resonance imaging (MRI) examination is a collection of tissue-specific images with various contrast visualizations. These pulse sequences offer useful anatomical details that assist clinicians in precisely diagnosing pathological conditions. The two main types of MRI sequences are T1-weighted (T1-w) and T2-weighted (T2-w). Because they have a high resolution and fewer artifacts, T1-w images are used as anatomical references. T2-w images, on the other hand, are significant MRI sequences that are useful for identifying the boundaries of pathological structures [3,4]. Research on brain cancer has advanced significantly because of the accurate classification of brain tumors. With a better understanding of the genetic and molecular features of brain tumors, researchers can create new therapies that target the pathways and mutations that fuel tumor growth. In addition, precise brain tumor classification enables medical professionals to customize a treatment plan based on the tumor. Early tumor identification is critical in the therapy procedure. The radiologist classifies the MR image as normal or abnormal using classification algorithms. Manual classification is a time-consuming task. Furthermore, different observers may reach different results, and even the same observer may reach different findings, when distinguishing tumors. As a result, automatic classification algorithms are required. Many algorithms based on image processing and machine learning have been developed by computer vision researchers [5,6]. Characterizing image regions measured by texture analysis is the goal of any diagnostic imaging technique. In addition, texture analysis is thought to be more accurate than human visual examination of the brain in MR images.

Feature extraction methods are categorized as handcrafted (classical) [7] or deep learning (DL)-based [8]. Handcrafted techniques in the traditional approach include features such as texture features and shape features, which are then fed into a machine learning algorithm as input. The handcrafted feature extraction methods are based on using texture analysis to characterize tissues in brain images. Texture analysis is a quick and easy approach to obtaining high-level information. Furthermore, texture analysis has been found to be more effective than human visual inspection in distinguishing between tissues.

DL approaches extract features from raw images using multiple layers [9,10]. In convolutional neural networks (CNNs), the feature extraction process typically uses numerous layers of convolution, pooling, and activation functions. Many DL-based techniques have been introduced in the literature, and a few of them are described here. The benefit of combining handcrafted feature extraction with a CNN is that it enables the extraction of both significant, clinically relevant handcrafted features and abstract deep features, which can improve the accuracy and interpretability of the results. Medical science has made significant advances in recent years as a result of machine learning and DL [11]. Deep transfer learning employs an existing model that has already been shown to be effective rather than developing a new one [12]. Existing literature predominantly focuses on either handcrafted or DL-based methods in isolation. As mentioned, handcrafted features capture domain-specific knowledge, while DL features discern complex, abstract patterns within the images. Therefore, the aim of this study is to use both the handcrafted and DL features extracted from brain MRI to provide additional diagnostic information for classifying the brain as normal or abnormal (with tumor). The primary contribution of this study is the development of a model that extracts features from both the Weierstrass elliptic function (WEF) and DenseNet-201 and combines them into a single feature set to achieve superior diagnostic accuracy and interpretability. High-frequency image details of the brain image are captured using the texture features of the proposed WEF model, while the overall classification of MRI brain images is greatly enhanced by the deep features of the DenseNet-201 model.

2 Related works

MRI classification gives radiologists a second perspective when examining medical data. To achieve an exact diagnosis, it is necessary to design an excellent diagnostic tool for image classification from MRI scans. MRI brain images can be diagnosed using supervised techniques such as artificial neural networks, which classify images into one of two categories: normal and abnormal [13,14,15]. The term abnormal (pathological) brain MRI refers to scans that do not show a healthy brain, whereas normal refers to scans in which the appearance and intensity of the brain are normal [16]. Several approaches for MRI brain tumor classification have been proposed.

Chaddad [17] proposed a unique approach for feature extraction utilizing T1 and T2 weighted MRI based on Gaussian mixture model features. The diseased region was identified utilizing multi-threshold segmentation with morphological MR image processing. The comparative investigation found that principal component analysis and wavelet-based features achieved the best results. This method, however, is dependent on the accuracy of the segmentation stage. Fayaz et al. [16] proposed an effective binary classification model for MRI images of the brain. The suggested model used a blended neural network for brain MRI classification and a discrete wavelet transform for feature extraction with statistical feature reduction. According to the results, the proposed approach produces good outcomes and detection accuracy. However, the approach used for feature selection was the reason for the good results.

In computer vision, DL has become a popular family of machine learning algorithms that has been effectively applied to a variety of problems. Due to its hierarchical technique of learning high-level representations from input data, DL employing CNNs is quickly gaining ground in neuroimaging. Several studies have used DL algorithms for MRI image classification, utilizing different CNN architectures. Khan et al. [18] proposed a CNN approach for classifying brain cancers on MRI images. Although the experiment was conducted on a limited dataset, the results reveal that the model accuracy was effective. In comparison to other pre-trained models, this methodology consumed less processing power and produced significantly superior accuracy outcomes. Rehman et al. [19] classified brain tumors using three distinct pre-trained CNN models (GoogleNet, VGG16, and AlexNet). Most of the employed CNN representations are modeled by deep extraction of image features, which works well with images. Aziz et al. [9] proposed employing an array of optimal DL features to classify multimodal brain tumors. After pre-processing the image database, two pre-trained DL models are used to select the best features from both deep models. The experiment was carried out using the BraTs2019 dataset and yielded an accuracy of 87.8%. However, because most extracted features are irrelevant to the classification phase, feature selection is utilized to improve classification accuracy. Aamir et al. [20] developed an automated method for identifying brain cancers on MRI scans: first, a pre-processing stage is employed to improve visual quality, followed by the application of two separate pre-trained DL models to extract strong characteristics from the images; clustering then reveals the top tumor sites. The suggested method outperformed previous approaches in classification accuracy. This method relies on clustering to segment the tumor region, which enhanced the classification accuracy. Wahlang et al. [15] proposed DL architectures for classifying brain MRI images as normal or abnormal, with gender and age introduced as higher-level characteristics to improve classification accuracy and meaning. A DL CNN-based approach was used alongside other deep architectures such as ResNet, LeNet, and AlexNet. In comparison to support vector machines (SVM) and AlexNet, the overall accuracy attained is 88%. However, in this approach, the gender and age features were used to reduce the effect of irrelevant features and the high similarity between tumor types. In order to identify early brain tumors, Noreen et al. [21] developed a multi-level feature extraction method using two pre-trained networks, DenseNet-201 and Inception-v3, which outperformed the most advanced DL and machine learning-based techniques available today for classifying brain tumors. The lack of labeled data is one of the most critical challenges to DL in medical applications, and recent breakthroughs in DL models have demonstrated that the more data there are, the better the result. The primary goal of this work is to develop an automated method for identifying the brain as normal (without tumor) or abnormal (with tumor) using a combination of handcrafted and DL features from brain MRI. In addition to the DenseNet-201 deep features, the handcrafted feature extraction using the mathematical model of the WEF is what makes this study novel.

3 Materials and methods

The existing literature predominantly focuses on either handcrafted or DL-based methods in isolation, yet brain sequences cannot be accurately classified as normal or abnormal tissue using just one method of MRI feature extraction: the number of features recovered by a single approach is insufficient for reliable identification. The proposed study therefore extracts features using two different methods: a new handcrafted feature extraction method based on the proposed WEF, and a DL feature extraction method. The proposed feature models are implemented in MATLAB 2023b. The study includes three stages. The first stage uses the newly created handcrafted feature extraction based on the WEF model, while the deep feature extraction model based on DenseNet-201 is used in the second stage. The third stage combines the two sets of extracted features into a feature vector for the classification stage, in which the SVM is utilized. The flow process for MRI classification is illustrated in Figure 1.

Figure 1: The flow process for MRI classification [24].

3.1 Proposed feature extraction

A new handcrafted feature extraction method based on the WEF is developed in this study. Medical image classification greatly benefits from image texture, which is a crucial component in many medical image analyses [22,23]. Among the most important feature extraction methods are those based on polynomials, which can identify subtle variations in image intensity values. The proposed WEF is designed to extract the texture from MRI images.

3.2 Method details

In this section, the pixel energy based on the WEF, which is an exact solution of a complex differential equation of complex Ginzburg–Landau type, is defined. Finally, the formula for obtaining the image features is defined using the values of the solution’s Weierstrass coefficients.

3.2.1 Complex differential equations (CDEs)

A differential equation which incorporates complex-valued functions and their derivatives is referred to as a CDE. Usually, it has the following form:

(1) $G(\zeta, \zeta', \zeta'', \ldots, \zeta^{(n)}) = 0,$

where $\zeta = \zeta(t)$ is a complex-valued function of an independent variable (commonly written as $t$), $\zeta'$ represents the first derivative of $\zeta$ with respect to $t$, $\zeta''$ represents the second derivative, and so on through the $n$th derivative $\zeta^{(n)}$.

According to their characteristics, CDEs can be divided into distinct types, such as linear or nonlinear, ordinary or partial, and homogeneous or nonhomogeneous. Complex-valued functions that satisfy the equation on a specific domain are the solutions to CDEs. Methods similar to those used to solve real-valued differential equations, such as separation of variables, integrating factors, power series solutions, and Laplace transforms, are frequently employed to solve CDEs. Nevertheless, the presence of imaginary and complex numbers in complex differential equations can lead to extra complications. It is significant to highlight that CDEs have uses in many disciplines, including physics, engineering, and mathematics, where the description of physical processes and the development of theoretical frameworks inherently involve the use of complex numbers.
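As a brief illustration (not taken from the original text) of how complex coefficients shape the solutions, consider the linear first-order CDE

$\zeta'(t) = i\,\zeta(t), \qquad \zeta(0) = \zeta_0,$

whose solution $\zeta(t) = \zeta_0 e^{it} = \zeta_0(\cos t + i\sin t)$ oscillates rather than grows or decays: the purely imaginary coefficient produces rotation in the complex plane, a behavior with no single real-valued counterpart.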

3.2.2 Image processing by CDEs

CDEs are frequently used to express and evaluate a variety of image-related phenomena in the area of image processing. Complex-valued functions, with real and imaginary parts that depict various facets of the image, are used in these equations. The complex Ginzburg–Landau equation (CGLE) is a typical example of a complex differential equation utilized in image processing. The CGLE is a partial differential equation that can be used to model complex-valued fields, such as the amplitudes of optical waves in some systems. It is frequently employed in texture analysis, image segmentation, and pattern formation research. The CGLE’s general form is as follows:

(2) $\dfrac{\partial I}{\partial \tau} = (1 + i\kappa)I + (1 + i\gamma)\nabla^{2} I - (1 + i\mu)|I|^{2} I.$

Here, $I$ embodies the complex-valued field (the image matrix), $\tau$ denotes time, $\nabla^{2}$ denotes the Laplacian operator, $|I|^{2}$ indicates the squared magnitude of $I$, and $\kappa$, $\gamma$, and $\mu$ are parameters that control the dynamics of the equation. The CGLE captures the interaction between linear diffusion, nonlinearity, and coupling in the development of complex patterns.

One can examine the creation of different patterns in images, such as spirals, waves, and horizontal stripes, by solving the CGLE. Investigators are able to comprehend the fundamental mechanics of image formation by using it to analyze the stability and bifurcations of these patterns. CDEs provide a robust mathematical foundation for describing and interpreting intricate patterns and structures in image processing. They make it possible for experts to learn more about how images behave and to create cutting-edge methods for image creation, segmentation, and enhancement. The solution of the equation is given by using the WEF [25]:

(3) $\wp(\zeta) = \dfrac{1}{\zeta^{2}} + \sum_{n=2}^{\infty} \rho_n \zeta^{2n-2},$

where $\zeta$ is defined in the punctured unit disk. Moreover, the coefficients $\rho_n$ are defined as follows:

(4) $\rho_2 = \dfrac{\sigma_2}{20}, \quad \rho_3 = \dfrac{\sigma_3}{28}, \quad \rho_4 = \dfrac{1}{3}\rho_2^{2}, \quad \ldots, \quad \rho_n = \dfrac{3}{(2n+1)(n-3)} \sum_{j=2}^{n-2} \rho_j \rho_{n-j}, \quad n \ge 4,$

where $(\sigma_2, \sigma_3) = (0,1)$ for the equianharmonic case and $(\sigma_2, \sigma_3) = (1,0)$ for the lemniscatic case.
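To make the recursion in equation (4) concrete, the short sketch below (written in Python as an illustration; the authors implemented their models in MATLAB) computes the first few Weierstrass coefficients for either invariant pair. Note that $\rho_4 = \frac{1}{3}\rho_2^2$ falls out of the general recursion at $n = 4$.

```python
def weierstrass_coeffs(n_max, sigma2, sigma3):
    """Coefficients of Eq. (4): rho_2 = sigma2/20, rho_3 = sigma3/28, and
    rho_n = 3 / ((2n + 1)(n - 3)) * sum_{j=2}^{n-2} rho_j * rho_{n-j}, n >= 4."""
    rho = {2: sigma2 / 20.0, 3: sigma3 / 28.0}
    for n in range(4, n_max + 1):
        conv = sum(rho[j] * rho[n - j] for j in range(2, n - 1))
        rho[n] = 3.0 * conv / ((2 * n + 1) * (n - 3))
    return rho

print(weierstrass_coeffs(6, 0, 1))  # equianharmonic case (sigma2, sigma3) = (0, 1)
print(weierstrass_coeffs(6, 1, 0))  # lemniscatic case    (sigma2, sigma3) = (1, 0)
```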

Proposition 2.1

The upper bound in the open unit disk of $\zeta^{2}\wp(\zeta)$ is

(5) $|\zeta^{2}\wp(\zeta)| \le 1 + \sum_{n=2}^{\infty} \rho_n r^{n}, \quad |\zeta| = r < 1.$

Moreover, if it is convex, then

(6) $|\zeta^{2}\wp(\zeta)| \le 1 + \sum_{n=2}^{\infty} 2 r^{n}, \quad |\zeta| = r < 1.$

Proof

Let $|\zeta| = r < 1$. Then

$|\zeta^{2}\wp(\zeta)| = \left| 1 + \sum_{n=2}^{\infty} \rho_n \zeta^{2n} \right| \le 1 + \sum_{n=2}^{\infty} \rho_n |\zeta|^{2n} = 1 + \sum_{n=2}^{\infty} \rho_n r^{2n} \le 1 + \sum_{n=2}^{\infty} \rho_n r^{n},$

where $0 < r < 1$ is the probability of the pixel, and $\rho_n$ denotes the Weierstrass coefficients.

For the second part, since the functional $\zeta^{2}\wp(\zeta)$ is convex in the open unit disk, it satisfies the subordination inequality [26]

$\zeta^{2}\wp(\zeta) \prec \dfrac{1+\zeta}{1-\zeta}.$

But $\dfrac{1+\zeta}{1-\zeta} = 1 + 2\zeta + 2\zeta^{2} + \cdots$. Hence, $\rho_n \le 2$. □

By Proposition 2.1, the measurable amount of an image pixel’s energy is determined by its textural uniformity, image similarity, and degree of pixel-pair repetitions, and is given by:

(7) $E_{n,m} = \sum_{i=1}^{n} \sum_{j=1}^{m} G(i,j) \times r^{n} \times \rho_n,$

where $G(i,j)$ denotes the input image pixels, $r$ denotes the pixel probability, and the value of the Weierstrass coefficient $\rho$ was fixed experimentally at 1.5. It is worth noting that image feature extraction in image processing and computer vision frequently borrows ideas from convex optimization to address specific issues. For feature extraction, the input image $G(i,j)$ is first divided into non-overlapping blocks of size 16 × 16, and the feature of each block is computed using equation (7). Figure 2 shows the separation between the two classes (tumor and non-tumor) using a scatter plot.
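A minimal sketch of this block-wise feature computation is given below (in Python; the paper’s implementation was in MATLAB). The estimation of the pixel probability $r$ from the grey-level histogram and the use of the block order for the exponent $n$ are assumptions made for illustration, since equation (7) leaves them implicit; $\rho$ is fixed at 1.5 as stated above.

```python
import numpy as np

def wef_block_features(image, block=16, rho=1.5):
    """Block-wise WEF energy of Eq. (7): for every non-overlapping 16 x 16
    block, sum G(i, j) * r^n * rho over the block's pixels."""
    image = np.asarray(image, dtype=np.float64)
    h, w = image.shape
    # Pixel probability r: relative frequency of each grey level in the
    # image histogram (an assumed interpretation of "pixel probability").
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    r = prob[np.clip(image.astype(int), 0, 255)]
    n = block  # assumed: exponent n taken as the block order (n = m = 16)
    feats = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            G = image[y:y + block, x:x + block]
            R = r[y:y + block, x:x + block]
            feats.append(np.sum(G * R ** n * rho))  # Eq. (7) for one block
    return np.array(feats)

# e.g. wef_block_features(np.random.randint(0, 256, (240, 240)))
```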

Figure 2: The distribution of the two classes of features: non-tumor (blue dots) and tumor (red dots).

The texture features derived from the Weierstrass coefficients of the WEF are used to capture high-frequency image details of the brain image. These extracted features are distinctive and can be used to detect brain tumors.

3.3 CNN models

There have been numerous attempts to classify patients’ brain images from MRI using various techniques and image datasets, which inspired this study to investigate CNN-based DL feature extraction. CNN-based learning algorithms are well suited to medical image classification. Numerous images are required for CNN medical image training, which is difficult to accomplish in standard testing circumstances. Rather than beginning the learning process from scratch, transfer learning builds on past knowledge. The principal benefits of transfer learning are reduced training time and enhanced performance on small datasets. There are two traditional ways to employ pre-trained CNN models: first, the ready-made pre-trained CNN models are used for feature extraction and training; second, pre-trained models are fine-tuned in selected layers to achieve the desired results. The pre-trained DenseNet-201 has been used in this study to extract features from a dataset of brain tumors. DenseNet-201 contains multiple dense blocks (DBs) with direct connections between layers; every layer is connected to every other layer in the dense convolutional network. This eliminates the vanishing-gradient problem, improves feature propagation, encourages feature reuse, and substantially decreases the number of parameters, among other benefits. After several convolutional layers, it can be challenging to distinguish between some brain MRI slices because they share many characteristics. High-level neural networks suffer from accuracy declines due to vanishing gradients, which is why DenseNet was developed. For these reasons, DenseNet appears to be a viable option for this study’s classification of brain tumors using MRI. DenseNet-201 is made up of one convolutional layer (CONV), one max pooling layer (MP), three transition layers (TL), one average pooling layer (AVG), one fully connected layer (FC), and one Softmax layer, with four DB layers in total. The AVG layer accepts all the feature maps of the network to perform classification. The four DBs contain various numbers of convolutional layers, as illustrated in Figure 3 and in Table 1 [27].

Figure 3: Framework of DenseNet-201.

Table 1

DenseNet-201 architecture

Layer Output size Configuration
Convolution (Conv) 112 × 112 7 × 7 conv, stride 2
Pooling (MP) 56 × 56 3 × 3 max pool (MP), stride 2
Dense block 1 (DB) 56 × 56 [1 × 1 conv, 3 × 3 conv] × 6
Transition layer 1 (TL) 56 × 56 1 × 1 conv; 2 × 2 average pool (AVGP), stride 2
Dense block 2 28 × 28 [1 × 1 conv, 3 × 3 conv] × 12
Transition layer 2 28 × 28 1 × 1 conv; 2 × 2 average pool, stride 2
Dense block 3 14 × 14 [1 × 1 conv, 3 × 3 conv] × 48
Transition layer 3 7 × 7 1 × 1 conv; 2 × 2 average pool, stride 2
Dense block 4 7 × 7 [1 × 1 conv, 3 × 3 conv] × 32
Classification layer 1 × 1 7 × 7 global average pool; 1,000-way fully connected (FC), Softmax

During the feature extraction step, the transfer learning approach is used to extract the DL features from the pre-trained network. The network’s last fully connected layer is used as the feature vector.
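The sketch below illustrates this kind of fixed-feature extraction with a pre-trained DenseNet-201. It uses PyTorch/torchvision as an assumed stand-in for the paper’s MATLAB implementation; replacing the classification head with an identity exposes the 1,920-dimensional vector that feeds the final FC layer.

```python
import torch
from torchvision import models

# Load ImageNet-pre-trained DenseNet-201 and drop its 1000-way FC head so the
# forward pass returns the pooled feature vector instead of class scores.
model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
model.classifier = torch.nn.Identity()
model.eval()

# Stand-in for a pre-processed MRI slice, scaled to the network's input size
# (224 x 224, 3 channels) and normalized as in ImageNet training.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    deep_features = model(x)   # shape: (1, 1920) DenseNet-201 feature vector
print(deep_features.shape)
```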

3.4 MRI dataset

The multimodal brain tumor image segmentation (BRATS) [24] standard brain tumor image dataset has been used in this study. The publicly available BRATS 2019 dataset contains multimodal MRI scans, including T1-weighted, T1-weighted contrast-enhanced, T2-weighted, and fluid-attenuated inversion recovery (Flair) images. This study included 259 instances of high-grade glioma (HGG) and 76 cases of lower-grade glioma (LGG). Clinicians and qualified radiologists personally annotated all images. The T2-w images are utilized in this study because of their excellent sensitivity to tissue pathology and their ability to clearly display tumor boundaries. Four types of scans were obtained for each patient: T1, T1CE, T2, and Flair.

3.5 Performance metrics

The effectiveness of the extracted features in classifying the MRI into normal and abnormal cases was measured using the SVM classifier. The proposed MRI image classification is assessed using the following metrics, where TN stands for “true negative,” FP for “false positive,” TP for “true positive,” and FN for “false negative”:

(8) $\text{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN},$

(9) $\text{Sensitivity (Recall)} = \dfrac{TP}{TP + FN},$

(10) $\text{Specificity} = \dfrac{TN}{TN + FP},$

(11) $\text{Precision} = \dfrac{TP}{TP + FP}.$
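As a direct translation of equations (8)–(11), the helper below (Python, for illustration) computes the four metrics from hypothetical confusion-matrix counts.

```python
def classification_metrics(tp, tn, fp, fn):
    """Eqs. (8)-(11): accuracy, sensitivity (recall), specificity, and
    precision from the entries of a binary confusion matrix."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
    }

# e.g. classification_metrics(tp=95, tn=92, fp=5, fn=8)  # hypothetical counts
```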

3.6 Classification

The SVM is a popular machine learning algorithm used for classification. The SVM classifier operates by finding a hyperplane that divides the data into distinct classes with the largest feasible margin.

Nested five-fold cross-validation is used to assess the machine learning algorithm’s performance on a dataset by dividing the data into five equal-sized folds, four of which are used for training and one for testing. The process is repeated five times so that each fold is tested exactly once while the remaining four folds are used for training. The nested five-fold cross-validation procedure reduces the possibility of overfitting to the training data, allowing for a more accurate estimate of the performance of the machine learning algorithm.
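A minimal sketch of this evaluation protocol is shown below, using scikit-learn as an assumed substitute for the paper’s MATLAB setup; the synthetic features and parameter grid are illustrative, not the authors’ settings. The inner loop tunes the SVM, and the outer loop reports the accuracy estimate.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the combined WEF + DenseNet-201 feature vectors.
X, y = make_classification(n_samples=300, n_features=50, random_state=0)

# Inner loop tunes C and the RBF gamma; outer loop estimates generalization.
inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]}
model = GridSearchCV(make_pipeline(StandardScaler(), SVC(kernel="rbf")),
                     grid, cv=inner)
scores = cross_val_score(model, X, y, cv=outer)
print(f"nested 5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```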

4 Results and discussion

In this study, MATLAB R2021b on Windows 10 with an Intel(R) Core i7-7700HQ CPU @ 2.80 GHz and 16 GB of RAM was used to produce the test results. The complete dataset was split into three sections: training, validation, and testing, which helped prevent overfitting and improved the model’s ability to generalize. In addition, the MRI images are scaled to fit the pre-trained network’s input size. The proposed classification model investigates the effects of various image characteristics on the performance of an image classification scheme. The classifiers go through three different phases: the WEF feature model, the deep DenseNet-201 feature model, and the combination of the proposed WEF features with the deep DenseNet-201 features using the SVM classifier.

The hyperplane in an SVM classifier is chosen based on how well it divides the data into distinct classes. For binary classifications such as tumor vs non-tumor, this hyperplane seeks to minimize the classification error while maximizing the margin between the two classes. The applied SVM employed a regularization parameter (C) and radial basis function kernel hyperparameters, which were tuned for best results.

4.1 Results of classification using WEF feature model

Three classifiers are used to produce the test results: long short-term memory (LSTM), CNN, and SVM, as shown in Table 2. The LSTM connects memory blocks through its layers to recognize image features over time; however, it is designed primarily for sequential data. According to Table 2, the SVM classifier outperformed the other two classifiers, although the rates produced by the three classifiers were remarkably similar. The SVM classifier has several advantages: it is computationally efficient, making it suitable for image classification applications, and it is less likely to overfit, which happens when a model is overly complex and fits the training data too closely.

Table 2

Results of various classifiers using WEF feature model

Classifiers Accuracy (%) Sensitivity (%) Specificity (%)
LSTM 91.36 91.52 91.65
CNN 92.49 92.53 92.48
SVM 94.65 94.69 94.60

The detection accuracy measures the overall accuracy of the method by calculating the percentage of correctly classified tumor and non-tumor images, and it is an important metric for evaluating the performance of the detection algorithm. The proposed approach has a high detection capability on the BRATS 2019 dataset, which is a challenging database.

4.2 Ablation study of DenseNet-201 feature model

To extract features, the pre-trained DenseNet-201 was employed, with features extracted from lower and upper DBs of the network as illustrated in Table 3. There are four DBs with various numbers of convolutional layers in DenseNet-201. The goal is to extract features from the lower dense block 2, the middle dense block 3, and the final dense block 4. Table 3 shows the accuracy of the dense features extracted from the different blocks. The overall performance of the features extracted from DB 4 is superior to that of DBs 2 and 3.

Table 3

Average accuracy of different DenseNet-201 blocks

DB Accuracy (%)
Block 2 90.37
Block 3 92.82
Block 4 95.85

4.3 Results of classification using proposed WEF with the deep DenseNet-201 features

The classification results of the proposed WEF features are illustrated in Figure 4. Combining the proposed WEF handcrafted features with the pre-trained DenseNet-201 features succeeded in obtaining good classification accuracy, which reflects the benefit of applying combined feature extraction for detection in brain MRI images. On the BRATS MRI brain dataset, the accuracy using the handcrafted WEF and the DenseNet-201 features alone was 94.65 and 95.85%, respectively. On the BRATS2019 MRI brain dataset, the feature combination of the WEF handcrafted features with DenseNet-201 achieved 98.55% accuracy, the best classification result. The pre-trained DenseNet-201 comes second with a classification accuracy of 95.85%, and the proposed WEF handcrafted features come third with 94.65%. Figure 4 presents each model’s classification outcomes. Compared to single feature extraction techniques, the proposed feature combination yielded the best performance.
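For clarity, the fusion step reduces to a simple per-image concatenation of the two feature sets before SVM training; the sketch below uses hypothetical array shapes (Python/NumPy, for illustration only).

```python
import numpy as np

# Hypothetical stand-ins: one row per image.
wef_feats = np.random.rand(300, 256)    # handcrafted WEF block energies, Eq. (7)
deep_feats = np.random.rand(300, 1920)  # DenseNet-201 deep feature vectors

combined = np.hstack([wef_feats, deep_feats])  # fused (300, 2176) feature matrix
```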

Figure 4: The classification results of the WEF handcrafted features with pre-trained DenseNet-201 features on the MRI dataset.

4.4 Comparison with other works

To illustrate the efficacy of the suggested approach, a comparison was made with previous studies that investigated the diagnosis of brain MRI scans to improve the differentiation between normal and abnormal cases. Table 4 shows the performance comparison with other research that applied the same BRATS brain MRI dataset. The following existing methods were developed for brain classification in MRI images. Toğaçar et al.’s method [28] used an image augmentation technique with attention modules for brain MRI classification; the data augmentation improved the study’s classification results. Sameer et al.’s method [27] comprised three phases: enhancement, segmentation, and classification. Image enhancement was applied first to improve the input image, then segmentation, after which the classification phase was conducted using a 3D-CNN. Kuraparthi et al.’s method [5] proposed a brain MRI classification model based on three pre-trained CNNs, with an SVM classifier utilized to classify the extracted features. Moreover, Aziz et al.’s method [9] applied optimal DL features to classify multimodal brain tumors using the BRATS2019 dataset. The primary objective of the present study is to enhance the overall accuracy of brain MRI image classification using the feature combination technique. Sharma et al. [29] proposed a transfer learning model for the classification and prediction of brain tumors, using a modified CNN architecture with normalization and data augmentation preprocessing methods. Zahoor and Khan [30] developed Res-BRNet with boundary and regional operators to learn discriminative features beneficial to brain tumor MRI classification; Res-BRNet was based on spatial and residual ideas to acquire feature maps with a variety of rich information, and according to their experiments it outperforms conventional CNN models with excellent classification accuracy.

Table 4

Performance comparison on BRATS dataset

Methods Accuracy (%) Sensitivity (Recall) (%) Specificity (%)
Toğaçar et al. (AlexNet) [28] 87.93 84.38 92.31
Toğaçar et al. (VGG16) [28] 84.48 81.25 88.46
Sameer et al. [27] 96 95.90 98.14
Kuraparthi et al. (AlexNet) [5] 94.68 93.62 93.75
Kuraparthi et al. (VGG16) [5] 90.34 89.36 89.58
Aziz et al. [9] 87.80 87
Sharma et al. [29] 96 93.33 97.14
Zahoor and Khan [30] 79.01 96.76
Proposed WEF + trained DenseNet-201 98.55 98.50 98.59

According to the findings in Table 4, the proposed WEF feature extraction model with DL feature extraction improved classification accuracy when compared to other methods.

5 Conclusion

This study presented a novel mathematical model for classifying brain medical images using WEF-based and deep feature extraction. The brain image’s high-frequency details are captured by the proposed Weierstrass coefficients of the WEF, which calculate each pixel’s energy, while the DenseNet-201 model was used to extract deep features. The novelty of this study is the WEF feature extraction mathematical model in addition to the DL features. The BRATS dataset with a five-fold cross-validation scheme was used for validation. The proposed brain medical image classification approach outperforms the comparator methods in experiments using brain MRI medical image sets; the classification accuracy achieved was 98.55% for the WEF with pre-trained DenseNet-201. As a potential future improvement, more refined trained CNNs with parameter optimization could be added at the same level of efficacy. A limitation of the proposed feature extraction method is that the handcrafted features are susceptible to noise and artifacts in the data, which can impair classification performance.

  1. Funding information: Authors state no funding involved.

  2. Author contributions: Conceptualization: I.A. and H.A.J.; methodology: H.A.J.; software and validation: I.A.; formal analysis and investigation: H.A.J.; resources and data curation: I.A.; writing – original draft preparation: H.A.J.; writing – review and editing, and funding acquisition: I.A. All authors have read and agreed to the published version of the manuscript.

  3. Conflict of interest: No potential conflict of interest was reported by the authors.

  4. Ethical approval: No ethical approval was required for this work because the dataset BRATS 2020 (Multimodal Brain Tumor Segmentation Challenge) used to build the proposed MRI brain tumor classification model is freely available online, and no identifying information is provided for individual images.

  5. Data availability statement: The dataset analyzed during this study is the standard brain tumor image dataset (BRATS), https://www.med.upenn.edu/cbica/brats2020/data.html.

References

[1] Mehmood A, Abugabah A, AlZubi AA, Sanzogni L. Early diagnosis of Alzheimer’s disease based on convolutional neural networks. Comput Syst Sci Eng. 2022;43(1):305–15. doi: 10.32604/csse.2022.018520.

[2] Alsheikhy TSaA. Classification of brain tumors using hybrid feature extraction based on modified deep learning techniques. Comput Mater Continua. 2023;77(1):426–43. doi: 10.32604/cmc.2023.040561.

[3] Hasan AM, Meziane F. Automated screening of MRI brain scanning using grey level statistics. Comput Electr Eng. 2016;53:276–91. doi: 10.1016/j.compeleceng.2016.03.008.

[4] Hasan AM, Qasim AF, Jalab HA, Ibrahim RW. Breast cancer MRI classification based on fractional entropy image enhancement and deep feature extraction. Baghdad Sci J. 2023;20(1):0221. doi: 10.21123/bsj.2022.6782.

[5] Kuraparthi S, Reddy MK, Sujatha C, Valiveti H, Duggineni C, Kollati M, et al. Brain tumor classification of MRI images using deep convolutional neural network. Trait du Signal. 2021;38(4):1171–9. doi: 10.18280/ts.380428.

[6] Iqbal S, Ghani MU, Saba T, Rehman A. Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN). Microsc Res Tech. 2018;81(4):419–27. doi: 10.1002/jemt.22994.

[7] Hasan AM, Jalab HA, Ibrahim RW, Meziane F, AL-Shamasneh AA, Obaiys SJ. MRI brain classification using the quantum entropy LBP and deep-learning-based features. Entropy. 2020;22(9):1033. doi: 10.3390/e22091033.

[8] Ibrahim RW, Hasan AM, Jalab HA. A new deformable model based on fractional Wright energy function for tumor segmentation of volumetric brain MRI scans. Comput Methods Prog Biomed. 2018;163:21–8. doi: 10.1016/j.cmpb.2018.05.031.

[9] Aziz A, Attique M, Tariq U, Nam Y, Nazir M, Jeong C-W, et al. An ensemble of optimal deep learning features for brain tumor classification. Comput Mater Continua. 2021;69(2):2653–70. doi: 10.32604/cmc.2021.018606.

[10] Chang I-Y, Huang T-Y. Deep learning-based classification for lung opacities in chest x-ray radiographs through batch control and sensitivity regulation. Sci Rep. 2022;12(1):1–8. doi: 10.1038/s41598-022-22506-4.

[11] Hasan AM, Jalab HA, Meziane F, Kahtan H, Al-Ahmad AS. Combining deep and handcrafted image features for MRI brain scan classification. IEEE Access. 2019;7:79959–67. doi: 10.1109/ACCESS.2019.2922691.

[12] Khan R, Akbar S, Mehmood A, Shahid F, Munir K, Ilyas N, et al. A transfer learning approach for multiclass classification of Alzheimer’s disease using MRI images. Front Neurosci. 2022;16. doi: 10.3389/fnins.2022.1050777.

[13] Jalab HA, Al-Shamasneh AA, Shaiba H, Ibrahim RW, Baleanu D. Fractional Renyi entropy image enhancement for deep segmentation of kidney MRI. Comput Mater Continua. 2021;67(2):2061–75. doi: 10.32604/cmc.2021.015170.

[14] Saeed S, Haroon HB, Naqvi M, Jhanjhi NZ, Ahmad M, Gaur L. A systematic mapping study of low-grade tumor of brain cancer and CSF fluid detecting approaches and parameters. In: Approaches and Applications of Deep Learning in Virtual Medical Care; 2022. p. 236–59. doi: 10.4018/978-1-7998-8929-8.ch010.

[15] Wahlang I, Maji AK, Saha G, Chakrabarti P, Jasinski M, Leonowicz Z, et al. Brain magnetic resonance imaging classification using deep learning architectures with gender and age. Sensors. 2022;22(5):1766. doi: 10.3390/s22051766.

[16] Fayaz M, Haider J, Qureshi MB, Qureshi MS, Habib S, Gwak J. An effective classification methodology for brain MRI classification based on statistical features, DWT and blended ANN. IEEE Access. 2021;9:159146–59. doi: 10.1109/ACCESS.2021.3132159.

[17] Chaddad A. Automated feature extraction in brain tumor by magnetic resonance imaging using Gaussian mixture models. Int J Biomed Imaging. 2015;2015:1–11. doi: 10.1155/2015/868031.

[18] Khan HA, Jue W, Mushtaq M, Mushtaq MU. Brain tumor classification in MRI image using convolutional neural network. Math Biosci Eng. 2020;17(5):6203–16. doi: 10.3934/mbe.2020328.

[19] Rehman A, Naz S, Razzak MI, Akram F, Imran M. A deep learning-based framework for automatic brain tumors classification using transfer learning. Circuits Syst Signal Process. 2020;39(2):757–75. doi: 10.1007/s00034-019-01246-3.

[20] Aamir M, Rahman Z, Dayo ZA, Abro WA, Uddin MI, Khan I, et al. A deep learning approach for brain tumor classification using MRI images. Comput Electr Eng. 2022;101:108105. doi: 10.1016/j.compeleceng.2022.108105.

[21] Noreen N, Palaniappan S, Qayyum A, Ahmad I, Imran M, Shoaib M. A deep learning model based on concatenation approach for the diagnosis of brain tumor. IEEE Access. 2020;8:55135–44. doi: 10.1109/ACCESS.2020.2978629.

[22] Jalab HA, Ibrahim RW. Fractional Conway polynomials for image denoising with regularized fractional power parameters. J Math Imaging Vis. 2015;51(3):442–50. doi: 10.1007/s10851-014-0534-z.

[23] Jalab HA, Ibrahim RW. Fractional Alexander polynomials for image denoising. Signal Process. 2015;107:340–54. doi: 10.1016/j.sigpro.2014.06.004.

[24] Multimodal Brain Tumor Segmentation Challenge; 2019. https://www.med.upenn.edu/cbica/brats2019/data.html.

[25] Porubov A, Velarde M. Exact periodic solutions of the complex Ginzburg–Landau equation. J Math Phys. 1999;40(2):884–96. doi: 10.1063/1.532692.

[26] Miller SS, Mocanu PT. Differential subordinations: theory and applications. CRC Press; 2000. doi: 10.1201/9781482289817.

[27] Sameer MA, Bayat O, Mohammed HJ. Brain tumor segmentation and classification approach for MR images based on convolutional neural networks. In: 2020 1st Information Technology to Enhance e-Learning and Other Applications (IT-ELA). IEEE; 2020. doi: 10.1109/IT-ELA50150.2020.9253111.

[28] Toğaçar M, Ergen B, Cömert Z. BrainMRNet: Brain tumor detection using magnetic resonance images with a novel convolutional neural network model. Med Hypotheses. 2020;134:109531. doi: 10.1016/j.mehy.2019.109531.

[29] Sharma S, Gupta S, Gupta D, Juneja A, Khatter H, Malik S, et al. Deep learning model for automatic classification and prediction of brain tumor. J Sens. 2022;2022:1–11. doi: 10.1155/2022/3065656.

[30] Zahoor MM, Khan SH. Brain tumor MRI classification using a novel deep residual and regional CNN. arXiv preprint arXiv:2211.16571. 2022. doi: 10.21203/rs.3.rs-2369069/v1.

Received: 2024-02-08
Accepted: 2024-03-27
Published Online: 2024-06-19

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
