
Classification of Brain Tumors Using MRI Images Based on Fine-Tuned Pretrained Models

Aliyu Tetengi Ibrahim1, Mohammed Tukur Mohammed2, Mohammed Awal Suleiman3, Abdulaziz Bello Kofa4, Idris Muhammad Ladan5.

1Department of Computer Science, Ahmadu Bello University, Zaria, Nigeria, 2National Cereals Research Institute Badeggi, Nigeria, 3Computer Science Department, Federal Polytechnic Bida, Nigeria, 4Technical Services, Galaxy Backbone Limited, Nigeria, 5Department of Computer Science, UNITe S Cisco Networking Academy, Nigeria

DOI: https://doi.org/10.51244/IJRSI.2024.1103025

Received: 20 February 2024; Accepted: 05 March 2024; Published: 07 April 2024

ABSTRACT

Brain tumors are frequently diagnosed malignant growths found across all age groups, and they pose a significant threat if not detected promptly. Assessing their severity presents a challenge for radiologists during health monitoring and automated identification. Detecting and categorizing affected areas using Magnetic Resonance Imaging (MRI) scans is crucial. Various types of tumors exist, such as glioma, meningioma, pituitary, and benign tumors. Moreover, manual procedures are inefficient, error-prone, and time-consuming, so there is a pressing need for a reliable solution that ensures precise diagnosis. Convolutional Neural Networks (CNNs), a cutting-edge technique in deep learning, have been employed for brain tumor detection using MRI images. However, training remains challenging when the number of available images falls short of a CNN's requirements. To address this limitation, transfer learning techniques have been utilized, and image augmentation methods have been applied to enlarge the dataset and improve model performance. This study introduces a modified deep CNN incorporating transfer learning techniques and a learning rate scheduler for brain tumor classification. Four pretrained models (Xception, DenseNet201, MobileNet, and InceptionResNetV2) serve as base models. Each model is trained individually multiple times, both with and without a learning rate scheduler, while employing two optimizers, Adam and Adamax, independently to assess performance. The training process is conducted in four stages: (i) a static learning rate with the Adam optimizer, (ii) a static learning rate with the Adamax optimizer, (iii) a dynamic learning rate with the Adam optimizer, and (iv) a dynamic learning rate with the Adamax optimizer. To enhance each model's depth and capacity for extracting more relevant features from the images, two additional dense layers are incorporated into each model, all using Leaky ReLU as the activation function. The proposed model is trained and validated on a publicly available MRI image dataset. Analysis of the test results showed that InceptionResNetV2, without a learning rate scheduler, outperformed the other models, achieving an accuracy of 96.58%, with precision, recall, and F1-score values of 96.59%, 96.58%, and 96.57%, respectively.

Keywords: Brain tumor, Image classification, Magnetic Resonance Imaging, Deep Learning, Transfer Learning, Convolutional Neural Network, Pretrained Models.

INTRODUCTION

Cancer ranks among the primary factors contributing to global mortality, posing a substantial obstacle to enhancing life expectancy. Brain tumors develop due to the proliferation of irregular cells within the brain, resulting in harm to crucial brain tissues and potentially leading to cancerous growth [1]. Human brain tumors encompass approximately 150 diverse types, classified into (i) noncancerous tumors and (ii) cancerous tumors [2]. According to relevant literature [3], brain tumors are typically classified into two stages: primary and secondary. In the primary stage, tumors are characterized by small sizes and are termed benign in biological parlance. Conversely, in the secondary stage, tumors originate from other body parts, are larger than benign tumors, and are referred to as malignant [4]. While benign tumors progress slowly and remain localized, malignant tumors grow rapidly and have invasive properties [5]. Among malignant tumors, gliomas, meningiomas, and pituitary tumors are the most prevalent types [6]. Gliomas stem from glial cells in the brain, whereas meningioma tumors typically arise from the protective membrane covering the brain and spinal cord [7]. Pituitary brain tumors, which develop in the pituitary gland, a crucial part of the brain responsible for producing essential hormones, are generally benign. Various techniques are employed in clinical settings for brain tumor treatment [8]. In the benign stage, radiotherapy is often effective, and patients may survive without surgery [9]. Conversely, the cancerous stage poses significant risks and typically requires a combination of chemotherapy and radiotherapy for treatment [10]. In general, benign tumors spread more slowly than malignant ones. Nevertheless, prompt and accurate diagnosis is paramount, necessitating the expertise of radiologists [11].

In clinical settings, MRI stands out as the most effective tool for in vivo, noninvasive visualization of both the anatomy and functionality of brain tumors [12]. Early diagnosis and precise classification of these tumors are critical for potentially saving lives. Manual reading, however, poses significant challenges, accounting for approximately 10-30% of all misdiagnoses. The use of Computer-Aided Diagnosis (CAD) is therefore imperative to help radiologists improve accuracy and reduce the time required for image classification [13].

In the field of radiology, the integration of artificial intelligence (AI) has been shown to reduce error rates beyond human capabilities [14]. Machine learning and deep learning, both subsets of AI, empower radiologists to swiftly locate and classify tumors without resorting to surgical intervention [15]. Among deep learning approaches, Convolutional Neural Networks (CNNs) have emerged as particularly promising, achieving significant success in medical imaging tasks [16]. CNNs excel at automatically extracting relevant features from images while reducing dimensionality [17]. Consequently, the need for manually crafted features diminishes as CNNs autonomously discern crucial features to generate accurate predictions. However, despite the effectiveness of CNNs in diagnosing medical images, they demand a substantial volume of data for training to deliver optimal performance. The availability of clean medical images for training purposes is limited, which poses a challenge. Models trained on a restricted dataset risk memorizing specific examples rather than learning generalizable features, leading to issues such as poor generalization, bias, overfitting, and suboptimal performance on unseen data. To mitigate the high training time and computational demands associated with building CNN architectures from scratch, pretrained models with stored weights, such as EfficientNet [18], DenseNet [19], and ResNet [20], are increasingly utilized for brain tumor classification. Recent advancements in CNN models have revolutionized medical imaging, enabling the diagnosis, detection, and classification of various serious human conditions such as brain tumors [21], skin lesions [22], breast cancer [23], COVID-19 [24], diabetic retinopathy [25], and arrhythmias [26]. However, categorizing brain tumors poses significant challenges due to dynamic changes in morphological structure, complex tumor appearances in images, and irregular lighting effects. Thus, effective techniques for brain tumor classification are crucial to assist radiologists in their decision-making process. Each year, new classification methodologies are introduced to address limitations in previous approaches. The proposed method aims to reduce training time and incorporates strategies such as data augmentation, improved fine-tuning, and enhanced optimization techniques to strengthen model training. The specific contributions of this study include:

  1. Each of the four pretrained deep learning models (Xception, DenseNet201, MobileNet, and InceptionResNetV2) was fine-tuned using a deep transfer learning approach. Training was performed on imbalanced data while incorporating data augmentation techniques. From the trained models, features were extracted from the global average pooling layer, capturing comprehensive information for each tumor type. This methodology aims to deliver dependable and accurate tumor classification, assisting radiologists in forming precise diagnostic opinions.
  2. All four models underwent training, validation, evaluation, and comparison using both the Adam and Adamax optimization techniques. Static and dynamic learning rate methods were applied during the training of each pretrained model; each learning rate technique was tested with both optimizers, and the outcomes were compared.
  3. The performances of the four pretrained models were compared across four stages to determine the best-performing model. The results show that the proposed model outperforms certain previous methodologies, achieving higher accuracy on benchmark datasets.

RELATED WORKS

Brain tumors represent one of the most lethal forms of cancer across all age groups. Their classification poses a significant challenge for radiologists involved in healthcare monitoring and automated diagnosis. Recent research has introduced numerous machine learning-based methods for brain tumor classification (BTC), aiming to assist radiologists in conducting more precise diagnostic evaluations [27]. Machine learning and deep learning are the primary techniques employed for this purpose [28]. Within machine learning, various approaches such as k-nearest neighbor (KNN), support vector machines (SVM), decision trees, and genetic algorithms have been utilized in different studies [29]. While many techniques in the literature focus on binary class classification, distinguishing between benign and malignant tumors, this task is relatively straightforward due to the clear interpretation of tumor shape and texture [30]. In contrast, multiclass classification poses greater difficulty due to the high similarity among different tumor types [31].

In their research, [27] introduced an automated method for brain tumor detection using magnetic resonance imaging (MRI). Initially, brain MRI images undergo preprocessing to enhance their visual quality. Subsequently, two distinct pre-trained deep learning models are employed to extract robust features from the images. These resulting feature vectors are then amalgamated to create a hybrid feature vector utilizing the partial least squares (PLS) technique. Next, the primary tumor locations are identified via agglomerative clustering. Finally, these proposals are standardized to a predetermined size and forwarded to the head network for classification. Compared to existing methodologies, the proposed approach achieved an impressive classification accuracy of 98.95%.

The findings of [32] proposed a deep learning model for brain tumor detection in MRI images, aiming for increased accuracy. Their approach involves combining a Convolutional Neural Network (CNN) with a Long Short-Term Memory (LSTM). LSTMs enhance CNN’s feature extraction capabilities, particularly beneficial for image classification tasks. The proposed LSTM-CNN model outperformed standard CNN classification methods, achieving the highest accuracy of 92%.

The research of [33] introduced a method called Border Collie Firefly Algorithm-based Generative Adversarial Network (BCFA-based GAN) using the Spark framework to classify the severity level of brain tumors effectively. Their approach involves employing a set of slave nodes and a master node for severity classification. Pre-processing is conducted using a Laplacian filter to remove noise from the images. Feature extraction occurs on the slave nodes, and the extracted features are then inputted into a Support Vector Machine (SVM) for tumor classification on the master node. Finally, the results are fed into the BCFA-based GAN for severity level classification. The proposed BCFA-based GAN demonstrated superior performance, achieving high accuracy (97.515%), sensitivity (97.515%), and specificity (97.515%).

The study of [34] introduced a multi-grade brain tumor classification system based on a Convolutional Neural Network (CNN) model. Initially, deep learning techniques are utilized to segment tumor regions from MR images. Subsequently, extensive data augmentation is implemented to enhance the training of the system, addressing the challenge of limited data availability when dealing with MRI for multi-grade brain tumor classification. Finally, a pre-trained CNN model is fine-tuned using augmented data for classifying brain tumor grades. The proposed system undergoes experimental evaluation on both augmented and original data, demonstrating convincing performance compared to existing methods with an accuracy of 94.58%.

Reference [31] introduced a novel automated deep learning approach for multiclass brain tumor classification. The method fine-tunes the DenseNet201 pretrained deep learning model and trains it using a deep transfer learning approach with imbalanced data. Two feature selection techniques are proposed: Entropy-Kurtosis-based High Feature Values (EKbHFV) and a metaheuristic Modified Genetic Algorithm (MGA). The features selected by the genetic algorithm are further refined using a new threshold function proposed in the study. The features obtained from the EKbHFV and MGA approaches are then combined using a non-redundant serial-based fusion method and classified with a multiclass cubic SVM classifier. Experimental evaluations conducted on two datasets, BRATS 2018 and BRATS 2019, achieved an accuracy exceeding 95% without data augmentation.

Reference [21] devised an efficient automated method for brain tumor classification to aid radiologists, reducing the time required for precise diagnosis. The approach utilized 3,064 T1-weighted contrast-enhanced brain MR images (T1W-CE MRI) from 233 patients. Five fine-tuned pretrained models (GoogLeNet, AlexNet, ShuffleNet, SqueezeNet, and NASNet-Mobile) were evaluated on their performance in classifying the brain tumor categories. The proposed CNN models incorporated layers from the pretrained networks, with the last three layers replaced to accommodate the new image classes (meningioma, pituitary, and glioma). In the pretrained SqueezeNet, however, the last learnable 1-by-1 convolutional layer was replaced instead, keeping the number of filters equal to the number of classes. Finally, a majority voting technique combined the outputs of the five models, treating them as a decision-making committee. The proposed system exhibited significant improvement, achieving an overall accuracy of 99.31%.

The work of [35] introduced a precise and optimized system for brain tumor detection. The system encompasses preprocessing, segmentation, feature extraction, optimization, and detection stages. Preprocessing involves the utilization of a compound filter composed of Gaussian, mean, and median filters. Image segmentation is performed using threshold and histogram techniques. Feature extraction employs the Grey Level Co-occurrence Matrix (GLCM). Optimal feature selection is achieved through the application of whale optimization and grey wolf optimization methods. Brain tumor detection is accomplished using a CNN classifier, resulting in an accuracy of 98.9%.

The study of [36] proposed an MRI-based brain tumor detection approach using convolutional deep learning methods and selected machine learning techniques to enhance performance. Initially, preprocessing and augmentation algorithms were employed on MRI brain images. Subsequently, they introduced a new 2D Convolutional Neural Network (CNN) and a convolutional auto-encoder network, both pre-trained with their respective hyperparameters. The network architecture comprised eight convolutional and four pooling layers, with batch-normalization layers applied after all convolutional layers. The modified auto-encoder network consisted of a convolutional auto-encoder network followed by a convolutional network for classification, utilizing the last output encoder layer of the initial part. Additionally, six machine learning techniques were utilized for brain tumor classification, and their results were compared.

The research of [37] developed a brain tumor diagnosis system based on deep learning, employing a CNN architecture known as EfficientNetV2S, enhanced with Ranger optimization and comprehensive preprocessing techniques. The system comprises four stages: (1) preprocessing, (2) data generation, (3) CNN framework or deep feature extraction for tumor detection, and (4) diagnosis. Evaluated on the test dataset, the model achieved an accuracy exceeding 98%.

MATERIAL AND METHODS

This section outlines the materials and methods employed in the study.

  • Dataset and pre-processing

The Brain Tumor MRI image Dataset, available on Kaggle, was utilized for both training and evaluating the model in this study. This dataset consists of 2,611 images distributed among four classes: glioma tumors (741 images), no tumor (400 images), meningioma tumors (749 images), and pituitary tumors (721 images). Each image was resized to 224 by 224 pixels to ensure compatibility with the proposed model. The dataset was divided into an 80:20 ratio for training and testing. Table 1 presents statistics for the images used in this study, and Figure 1 displays samples from the brain tumor images.

Table I. Summary of the Brain Tumor Images Used in this Study

Class | Number of Images
Glioma tumor | 741
No tumor (healthy) | 400
Meningioma tumor | 749
Pituitary tumor | 721
Total | 2,611

Figure 1. Sample brain tumor images: glioma tumor (a, b, c), no tumor/healthy (d, e, f), meningioma tumor (g, h, i), pituitary tumor (j, k, l).
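As a minimal sketch of this preparation step, assuming the Kaggle dataset is unpacked into one folder per class (the directory names here are hypothetical), the resizing, normalization, and 80:20 split can be expressed with the TensorFlow Keras framework used later in this study; the augmentation settings of Table 2 would be added to the training generator:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical layout: data/{glioma,meningioma,notumor,pituitary}/*.jpg
datagen = ImageDataGenerator(
    rescale=1.0 / 255.0,   # normalize pixel intensities to [0, 1]
    validation_split=0.2,  # hold out 20% of the images for testing
)

train_gen = datagen.flow_from_directory(
    "data", target_size=(224, 224), batch_size=32,
    class_mode="categorical", subset="training")

test_gen = datagen.flow_from_directory(
    "data", target_size=(224, 224), batch_size=32,
    class_mode="categorical", subset="validation",
    shuffle=False)  # keep label order fixed for later evaluation
```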

  • Proposed methodology

This work presents an effective deep learning framework for multiclass brain tumor classification, utilizing transfer learning techniques with four pretrained models. To mitigate potential overfitting and reduce the total parameter count, a Global Average Pooling (GAP) layer is integrated. The proposed method comprises four main steps: preprocessing of the brain tumor images (resizing, normalizing, and splitting), image augmentation (shear range, zoom range, and brightness range), image feature extraction using each of the four pretrained models (Xception, DenseNet201, MobileNet, and InceptionResNetV2), and classification using SoftMax. The performance of each architecture is assessed and compared on test image instances using two optimizers, Adam and Adamax, each tested with static and dynamic learning rates. The flowchart and architecture of the proposed model are illustrated in Figures 2 and 3; a code sketch of the classification head follows below.
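As a minimal sketch of this design (the dense-layer widths of 256 and 128 are illustrative assumptions; the paper specifies only that two additional dense layers with Leaky ReLU were added), one of the four bases can be extended as follows:

```python
from tensorflow.keras import Model, layers, regularizers
from tensorflow.keras.applications import InceptionResNetV2

# Pretrained base; any of the four architectures can be swapped in here.
base = InceptionResNetV2(include_top=False, weights="imagenet",
                         input_shape=(224, 224, 3))
base.trainable = False  # freeze the ImageNet weights

x = layers.GlobalAveragePooling2D()(base.output)  # GAP layer curbs overfitting
x = layers.Dense(256, kernel_regularizer=regularizers.l2(1e-4))(x)  # width assumed
x = layers.LeakyReLU(alpha=0.01)(x)               # Leaky ReLU activation
x = layers.Dense(128, kernel_regularizer=regularizers.l2(1e-4))(x)  # width assumed
x = layers.LeakyReLU(alpha=0.01)(x)
outputs = layers.Dense(4, activation="softmax")(x)  # four tumor classes

model = Model(base.input, outputs)
```

The same head is attached to each of the four bases in turn, so that the architectures differ only in their frozen feature extractor.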

  • Pretrained model

Pretrained models, the basis of Transfer Learning (TL), are neural network models trained on large datasets, typically for tasks such as image classification, object detection, or natural language processing, and then saved along with their learned parameters. These pretrained models serve as starting points for further fine-tuning on smaller datasets or as feature extractors in transfer learning. Transfer learning reuses features already learned for one problem as a starting point for solving other problems, leveraging previously acquired knowledge to learn new data [38]. The models used here were trained on 1.28 million images (ImageNet) to predict 1,000 classes. Transfer learning is particularly beneficial when the target training data are limited compared to the source training data [31]. Consider source data with learning task P_S = {(d_1^s, e_1^s), ..., (d_i^s, e_i^s), ..., (d_n^s, e_n^s)} and source labels P_L = L_P, L_S, with (d_i^s, e_i^s) ∈ R. Likewise, consider target data with learning task J_T = {(d_1^t, e_1^t), ..., (d_i^t, e_i^t), ..., (d_m^t, e_m^t)} and target labels J_L = L_J, L_T, with (d_i^t, e_i^t) ∈ R, where m ≪ n and e_i^s, e_i^t are the labels of the training data. The goal of TL is to make J_T more learnable by combining the knowledge of P_S and J_T. Hence, transfer learning can be characterized by:

P_S ≠ J_T and P_L ≠ J_L                                         (1)

1) Xception: The work of [39] introduced the Xception architecture, which is characterized by a linear combination of depth-wise separable convolution layers with residual connections, aiming to reduce the complexity of the architecture. Compared to other deep convolutional networks, the Xception model prioritizes the efficient utilization of model parameters. It replaces the inception modules with depth-wise separable convolutions [40]. Essentially, the Xception model extends the Inception architecture [41].

2) InceptionResNetV2: The study of [42] introduced InceptionResNetV2, which simplified the inception blocks significantly. It is derived from the InceptionV3 model and incorporates concepts from ResNet models [40]. InceptionResNetV2 has demonstrated accelerated training of Inception networks by incorporating residual connections, thereby enhancing model performance.

3) MobileNet: The MobileNet model is specifically designed for mobile and embedded vision applications due to its lightweight deep neural network architecture. It employs depth-wise separable convolutions to construct compact networks, resulting in fewer parameters compared to standard convolutions. This architecture reduces computational costs and complexity, making it highly efficient for vision-related tasks with rapid processing [43][44].

4) DenseNet-201: DenseNet-201 is a pretrained CNN model comprising 201 layers, belonging to the DenseNet family of architectures. Reference [19] introduced this model, characterized by a unique connectivity pattern in which each layer is connected to every other layer in a feed-forward manner. This connectivity facilitates feature reuse, promotes efficient information flow, and mitigates issues such as the vanishing-gradient problem. DenseNet architectures are renowned for their performance and efficiency, particularly in image classification and object detection.

  • Optimizers

Several optimization algorithms are available for deep learning and CNN frameworks; in the proposed model, two optimizers, Adam and Adamax, were utilized and compared to achieve the highest accuracy. The Adam optimizer combines Stochastic Gradient Descent with momentum and RMSProp, computing a learning rate for each parameter individually. Adamax is derived from the Adam optimizer and calculates gradients from the first two moments. Adamax is particularly advantageous for models with embeddings, making it more useful than other optimizers in such settings [45].

Adam: m_n = E[A^n]                                        (2)

where m_n stands for the n-th moment of a variable A, and E[·] is the expected value.

Adamax: u_t = max(β2 · u_(t−1), |g_t|)                     (3)

where g_t is the gradient at step t and u_t is the exponentially weighted infinity norm, the Adamax counterpart of Adam's second-moment estimate v_t [45].
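Continuing the model sketch above, the two optimizers can be instantiated in Keras at the initial learning rate of 0.0001 listed in Table 2; one optimizer is used per training run:

```python
from tensorflow.keras.optimizers import Adam, Adamax

adam = Adam(learning_rate=1e-4)      # adaptive per-parameter learning rates
adamax = Adamax(learning_rate=1e-4)  # infinity-norm variant of Adam

# `model` is the network built in the earlier sketch; swap in `adamax`
# for the Adamax runs.
model.compile(optimizer=adam,
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```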

  • Leaky ReLU activation function

The leaky ReLU function is widely used as a neural network activation function in artificial intelligence and machine learning, especially in deep learning [46]. The Leaky Rectified Linear Unit, or Leaky ReLU, is an activation function based on the ReLU, but it has a small slope for negative values instead of a flat slope, allowing a small amount of gradient information to flow when the input is negative. It was introduced to prevent ReLU from creating dead neurons, i.e., neurons stuck at always outputting zero [48]. The leaky ReLU function is expressed as:

f(x) = x, if x > 0;   f(x) = a·x, otherwise                             (4)

where a is a small positive constant, typically around 0.01. This minor adjustment allows a small gradient when the unit is inactive (i.e., when x < 0), which can mitigate the dying ReLU problem, where neurons become inactive during training and fail to recover.

Compared to its predecessors such as the Sigmoid function and TanH function [46], the leaky ReLU is considerably simpler and can address the issue of gradient vanishing. Additionally, calculating the derivative of the leaky ReLU function (i.e. 1 or a) is much simpler.

Mathematically, the leaky ReLU function has C0 continuity but is non-differentiable at the origin. In practical applications, the derivative at the origin [46], f′(x)|(x=0), is set as follows:

f′(x)|(x=0) ⇐ 1                                    (5)

From (4), the derivative for x > 0 is 1; thus (5) sets the derivative at the origin, f′(x)|(x=0), to 1. The symbol “⇐” denotes value assignment.
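A small NumPy sketch of equations (4) and (5), with the slope a = 0.01 mentioned above:

```python
import numpy as np

def leaky_relu(x, a=0.01):
    """Equation (4): identity for x > 0, small slope a otherwise."""
    return np.where(x > 0, x, a * x)

def leaky_relu_grad(x, a=0.01):
    """Derivative: 1 for x > 0, a for x < 0; assigned 1 at x = 0 per (5)."""
    return np.where(x >= 0, 1.0, a)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))       # [-0.02  0.    3.  ]
print(leaky_relu_grad(np.array([-2.0, 0.0, 3.0])))  # [0.01 1.   1.  ]
```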

  • SoftMax

For multi-class classification tasks, the traditional SoftMax classifier demonstrates exceptional performance. It effectively normalizes features across the classes, enhancing the clarity of positive features [47]. The SoftMax function is a widely used activation function in neural networks, especially in the output layer for classification purposes. It takes a vector of arbitrary real-valued scores as input and transforms it into a probability distribution that sums to 1. If x_i represents the i-th input feature and y_i its label [25], then given an input vector z = (z_1, z_2, ..., z_n), the SoftMax function outputs a vector σ(z) = (σ(z_1), σ(z_2), ..., σ(z_n)) where:

σ(z_i) = e^(z_i) / Σ_(j=1)^(n) e^(z_j)                                                  (6)

For every element zi in the input vector, each element of the output vector is determined by taking the exponential of the corresponding element in the input vector and then dividing it by the sum of the exponentials of all elements in the input vector. This normalization guarantees that the output vector accurately represents a valid probability distribution.
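Equation (6) in a few lines of NumPy; the max-subtraction inside the exponential is a standard numerical-stability detail, not part of the definition:

```python
import numpy as np

def softmax(z):
    """Equation (6): exponentiate each score and normalize to sum to 1."""
    e = np.exp(z - np.max(z))  # subtracting the max avoids overflow
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1, -1.0])  # raw scores for the four classes
probs = softmax(scores)
print(probs, probs.sum())  # a valid probability distribution summing to 1
```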

Figure 2. Flowchart diagram for the proposed model

Figure 3. Architecture for the proposed model

  • Evaluation metrics

After training, the performance of the proposed model was assessed. Given the presence of four categories in the brain tumor image dataset, multi-class classification was carried out. Several performance metrics were defined for evaluating the classifier. One commonly used metric is classification accuracy. Additionally, statistical measures such as system accuracy, specificity, sensitivity, precision, and F1-score were employed to assess the model’s performance by quantifying the predicted classes using the following quantities: number of true positives (TP), number of true negatives (TN), number of false positives (FP), and number of false negatives (FN). The expressions for the evaluation metrics utilized are provided below:

Accuracy = (TP + TN) / (TP + TN + FP + FN)                                              (7)

Precision = TP / (TP + FP)                                                              (8)

Recall = TP / (TP + FN)                                                                 (9)

F1-Score = (2 × TP) / (2 × TP + FP + FN)                                                (10)
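These metrics can be reproduced with scikit-learn. The paper does not state whether the reported scores use weighted or macro averaging over the four classes, so the weighted averaging below is an assumption:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [0, 1, 2, 3, 2, 1]  # toy ground-truth labels (four classes)
y_pred = [0, 1, 2, 3, 1, 1]  # toy predictions

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, average="weighted"))
print("Recall   :", recall_score(y_true, y_pred, average="weighted"))
print("F1-score :", f1_score(y_true, y_pred, average="weighted"))
```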

  • Experimental setup

The pretrained models used in this study were compiled with GPU support. All experimental investigations took place within the Kaggle cloud environment, utilizing a setup comprising 2 CPU cores, 13 gigabytes of RAM, and 1 Nvidia Tesla P100 GPU. The implementation of all code was conducted using the TensorFlow Keras framework, a Python-based open-source deep neural network library. Details regarding the specific parameters and their corresponding values employed for training each pretrained model are provided in Table 2.

RESULTS AND DISCUSSION

In this section, we present the experimental evaluation of the proposed approach for multi-grade brain tumor classification, based on multiple experiments on a publicly available brain tumor MRI image dataset. The main aim of this study was to compare the performance of four pretrained models. Although the dataset comprises a total of 2,611 images, this was considered insufficient for deep learning methods. To mitigate this issue, image augmentation techniques (zoom range, shear range, and brightness range) were applied to enlarge the dataset used for model training; the specific values for each augmentation technique are outlined in Table 2. Prior to applying image augmentation, the dataset was divided into an 80:20 ratio for training and testing purposes.

The pretrained models’ layers were frozen to maintain their weights, leaving only a few layers trainable to facilitate learning new features from the images. A brute force approach was utilized to test multiple epochs, ultimately deciding on 40 epochs. A batch size of 32 was employed, and both Adam and Adamax optimizers were used with an initial learning rate of 0.0001, which could be updated if the model’s performance declined. Validation accuracy was monitored during training to determine when the learning rate should be adjusted. All four chosen model architectures from the pretrained models were trained using the same dataset and parameters, and the results were compared to identify the best-performing model. The performance of each pretrained model was evaluated using four stages as outlined below.
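Putting these pieces together, one training run might look like the following sketch; the ReduceLROnPlateau factor and patience are assumptions, as the paper states only that the learning rate was reduced when validation accuracy stagnated:

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Dynamic learning rate: lower the rate when validation accuracy stalls.
lr_scheduler = ReduceLROnPlateau(monitor="val_accuracy",
                                 factor=0.5,    # assumed reduction factor
                                 patience=3,    # assumed patience in epochs
                                 verbose=1)

# `model`, `train_gen`, and `test_gen` come from the earlier sketches;
# the batch size of 32 is set in the generators themselves.
history = model.fit(train_gen,
                    validation_data=test_gen,
                    epochs=40,
                    callbacks=[lr_scheduler])  # drop for static-rate runs
```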

  • Model evaluation with the application of dynamic learning rate

During this stage, each of the four deep learning models underwent training, validation, and testing with dynamic learning rates, adjusting the rate whenever model performance stagnated. Evaluation revealed that the MobileNet model surpassed the others, achieving 96.43% accuracy with the Adam optimizer and 95.46% with Adamax; with Adamax it outperformed the Xception model only in precision. Across all models, performance was generally better with Adam than with Adamax, with DenseNet201 exhibiting the lowest performance in both cases. Results for Adam and Adamax are summarized in Table 3, Table 4, and Figure 4.

Table II. Summary of the parameters used to train the model including augmentation techniques

Parameter Name | Parameter Value
Learning rate | 0.0001
Epochs | 40
Optimizer 1 | Adam
Optimizer 2 | Adamax
Loss | Categorical crossentropy
Kernel regularizer | L2(0.0001)
Activation | Leaky ReLU
Batch size | 32
Shear range | 0.2
Zoom range | 0.2
Brightness range | [0.2, 1.0]

  • Model evaluation without the application of dynamic learning rate

During this phase, each of the four deep learning models underwent training, validation, and testing without dynamic learning rates. In contrast to the previous stage, the InceptionResNetV2 model with the Adam optimizer outperformed all others, achieving an accuracy of 96.58%. This suggests that the application of dynamic learning rates had minimal positive impact on model performance. With the Adamax optimizer in this phase, the Xception pretrained model surpassed the others, achieving an accuracy of 95.92% (Table 6). Performance analysis results are depicted in Table 5, Table 6, and Figure 5.

  • Other Evaluations

To further evaluate model performance, only the InceptionResNetV2 and MobileNet models were considered, as shown in Figure 7, where their confusion matrices were analyzed to determine the number of correct and incorrect predictions. Additionally, to visualize the models' performance throughout training, Figure 6 illustrates the training and validation accuracy, as well as the training and validation loss, from the first to the final epoch. It was observed that while InceptionResNetV2 outperformed MobileNet overall, the latter made no misclassifications of pituitary tumors and exhibited smoother training and validation accuracy curves.

Table III. Evaluation of the model with the use of dynamic learning rate and Adam.

Model | Accuracy (%) | F1 Score (%) | Recall (%) | Precision (%)
InceptionResNetV2 | 96.32 | 96.32 | 96.32 | 96.35
Xception | 96.17 | 96.16 | 96.17 | 96.21
MobileNet | 96.43 | 96.41 | 96.43 | 96.43
DenseNet201 | 94.49 | 94.47 | 94.49 | 94.50

Table IV. Evaluation of the model with the use of dynamic learning rate and Adamax.

Model | Accuracy (%) | F1 Score (%) | Recall (%) | Precision (%)
InceptionResNetV2 | 94.74 | 94.72 | 94.74 | 94.75
Xception | 95.46 | 95.44 | 95.46 | 95.46
MobileNet | 95.46 | 95.44 | 95.46 | 95.50
DenseNet201 | 93.77 | 93.74 | 93.77 | 93.77


Figure 4. Performance of the models with learning rate scheduler: (a) Adam, (b) Adamax

Table V. Evaluation of the model without the use of dynamic learning rate but with Adam.

Model | Accuracy (%) | F1 Score (%) | Recall (%) | Precision (%)
InceptionResNetV2 | 96.58 | 96.57 | 96.58 | 96.59
Xception | 96.38 | 96.36 | 96.38 | 96.41
MobileNet | 96.53 | 96.52 | 96.53 | 96.53
DenseNet201 | 94.38 | 94.36 | 94.38 | 94.41

Table VI. Evaluation of the model without the use of dynamic learning rate but with Adamax.

Model | Accuracy (%) | F1 Score (%) | Recall (%) | Precision (%)
InceptionResNetV2 | 95.41 | 95.39 | 95.41 | 95.40
Xception | 95.92 | 95.90 | 95.92 | 95.92
MobileNet | 95.87 | 95.85 | 95.87 | 95.88
DenseNet201 | 94.08 | 94.06 | 94.08 | 94.09


Figure 5. Performance of the models without learning rate scheduler: (a) Adam, (b) Adamax


Figure 6. (a) Accuracy curves for InceptionResNetV2, (b) loss curves for InceptionResNetV2, (c) accuracy curves for MobileNet, and (d) loss curves for MobileNet


Figure 7. (a) Confusion matrix for InceptionResNetV2, (b) confusion matrix for MobileNet
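The per-class counts behind Figure 7 can be reproduced as in the sketch below, assuming the test generator was created with shuffle=False (as in the earlier data sketch) so that predictions align with test_gen.classes:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Predict on the held-out 20% and tabulate correct vs. incorrect
# predictions per class; rows are true classes, columns predictions.
probs = model.predict(test_gen)
y_pred = np.argmax(probs, axis=1)
cm = confusion_matrix(test_gen.classes, y_pred)
print(cm)
```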

CONCLUSION AND FUTURE WORK

This paper introduces a transfer learning technique based on deep learning models for classifying brain tumors. The model's performance was assessed in four stages: training and evaluation using Adam with a dynamic learning rate, Adamax with a dynamic learning rate, Adam without a dynamic learning rate, and Adamax without a dynamic learning rate. Four pretrained CNN models (Xception, DenseNet201, MobileNet, and InceptionResNetV2) were fine-tuned for multi-grade brain tumor classification. Considering average accuracy and precision with both the Adam and Adamax optimizers, InceptionResNetV2 and MobileNet demonstrated superior performance compared to the other CNN architectures, achieving 96.58% accuracy with 96.59% precision and 96.43% accuracy with 96.43% precision, respectively.

Future plans include improving the brain tumor image dataset by implementing techniques like image segmentation during preprocessing and using Generative Adversarial Networks (GAN) for augmentation. Additionally, there is a consideration to transition from categorical to binary classification methodology to enhance the model’s performance assessment, especially in more complex scenarios. This strategic shift aims to enable the development of models capable of delivering more accurate predictions, particularly in demanding environments.

ACKNOWLEDGMENT

The successful completion of this research project was a joint endeavor, drawing on the distinctive skills, dedication, and contributions of each author. We would like to express our sincere gratitude to all those who contributed to this research. Additionally, we acknowledge the Department of Computer Science, Ahmadu Bello University, Zaria, and the Department of Computer Science, Federal Polytechnic Bida, for providing the resources and facilities essential for conducting this study.

REFERENCES

  1. Hossain, A., Islam, M. T., Abdul Rahim, S. K., Rahman, M. A., Rahman, T., Arshad, H., … & Chowdhury, M. E. (2023). A Lightweight Deep Learning Based Microwave Brain Image Network Model for Brain Tumor Classification Using Reconstructed Microwave Brain (RMB) Images. Biosensors, 13(2), 238.
  2. Pradhan, A., Mishra, D., Das, K., Panda, G., Kumar, S., & Zymbler, M. (2021). On the classification of MR images using “ELM-SSA” coated hybrid model. Mathematics, 9(17), 2095.
  3. Davis, F. G., Malmer, B. S., Aldape, K., Barnholtz-Sloan, J. S., Bondy, M. L., Brannstrom, T., … & Buffler, P. A. (2008). Issues of diagnostic review in brain tumor studies: from the Brain Tumor Epidemiology Consortium. Cancer Epidemiology Biomarkers & Prevention, 17(3), 484-489.
  4. Sharif, M., Amin, J., Raza, M., Yasmin, M., & Satapathy, S. C. (2020). An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor. Pattern Recognition Letters, 129, 150-157.
  5. Rasool, M., Ismail, N. A., Boulila, W., Ammar, A., Samma, H., Yafooz, W. M., & Emara, A. H. M. (2022). A hybrid deep learning model for brain tumour classification. Entropy, 24(6), 799.
  6. Abd El-Wahab, B. S., Nasr, M. E., Khamis, S., & Ashour, A. S. (2023). BTC-fCNN: Fast Convolution Neural Network for Multi-class Brain Tumor Classification. Health Information Science and Systems, 11(1), 3.
  7. Louis, D. N., Perry, A., Reifenberger, G., Von Deimling, A., Figarella-Branger, D., Cavenee, W. K., … & Ellison, D. W. (2016). The 2016 World Health Organization classification of tumors of the central nervous system: a summary. Acta neuropathologica, 131, 803-820.
  8. Fernandes, S. L., Tanik, U. J., Rajinikanth, V., & Karthik, K. A. (2020). A reliable framework for accurate brain image examination and treatment planning based on early diagnosis support for clinicians. Neural Computing and Applications, 32, 15897-15908.
  9. Durand, T., Bernier, M. O., Léger, I., Taillia, H., Noël, G., Psimaras, D., & Ricard, D. (2015). Cognitive outcome after radiotherapy in brain tumor. Current Opinion in Oncology, 27(6), 510-515
  10. DeAngelis, L. M. (2005). Chemotherapy for brain tumors—a new beginning. New England Journal of Medicine, 352(10), 1036-1038.
  11. Mittal, M., Goyal, L. M., Kaur, S., Kaur, I., Verma, A., & Hemanth, D. J. (2019). Deep learning based enhanced tumor segmentation approach for MR brain images. Applied Soft Computing, 78, 346-354.
  12. Wang, S., Feng, Y., Chen, L., Yu, J., Van Ongeval, C., Bormans, G., … & Ni, Y. (2022). Towards updated understanding of brain metastasis. American Journal of Cancer Research, 12(9), 4290.
  13. Dutta, P., Upadhyay, P., De, M., & Khalkar, R. G. (2020, February). Medical image analysis using deep convolutional neural networks: CNN architectures and transfer learning. In 2020 International Conference on Inventive Computation Technologies (ICICT) (pp. 175-180). IEEE.
  14. McBee, M. P., Awan, O. A., Colucci, A. T., Ghobadi, C. W., Kadom, N., Kansagra, A. P., … & Auffermann, W. F. (2018). Deep learning in radiology. Academic radiology, 25(11), 1472-1480.
  15. Mansour, R. F., Escorcia-Gutierrez, J., Gamarra, M., Díaz, V. G., Gupta, D., & Kumar, S. (2023). Artificial intelligence with big data analytics-based brain intracranial hemorrhage e-diagnosis using CT images. Neural Computing and Applications, 35(22), 16037-16049.
  16. Özcan, H., Emiroğlu, B. G., Sabuncuoğlu, H., Özdoğan, S., Soyer, A., & Saygı, T. (2021). A comparative study for glioma classification using deep convolutional neural networks. Molecular Biology and Evolution.
  17. Lundervold, A. S., & Lundervold, A. (2019). An overview of deep learning in medical imaging focusing on MRI. Zeitschrift für Medizinische Physik, 29(2), 102-127.
  18. Tan, M., & Le, Q. (2019, May). Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning (pp. 6105-6114). PMLR.
  19. Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4700-4708).
  20. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
  21. Nassar, S. E., Yasser, I., Amer, H. M., & Mohamed, M. A. (2024). A robust MRI-based brain tumor classification via a hybrid deep learning technique. The Journal of Supercomputing, 80(2), 2403-2427.
  22. Harangi, B. (2018). Skin lesion classification with ensembles of deep convolutional neural networks. Journal of biomedical informatics, 86, 25-32.
  23. Ting, F. F., Tan, Y. J., & Sim, K. S. (2019). Convolutional neural network improvement for breast cancer classification. Expert Systems with Applications, 120, 103-115.
  24. Hussain, E., Hasan, M., Rahman, M. A., Lee, I., Tamanna, T., & Parvez, M. Z. (2021). CoroDet: A deep learning based classification for COVID-19 detection using chest X-ray images. Chaos, Solitons & Fractals, 142, 110495.
  25. Wang, X., Lu, Y., Wang, Y., & Chen, W. B. (2018, July). Diabetic retinopathy stage classification using convolutional neural networks. In 2018 IEEE International Conference on Information Reuse and Integration (IRI) (pp. 465-471). IEEE.
  26. Jun, T. J., Nguyen, H. M., Kang, D., Kim, D., Kim, D., & Kim, Y. H. (2018). ECG arrhythmia classification using a 2-D convolutional neural network. arXiv preprint arXiv:1804.06812.
  27. Aamir, M., Rahman, Z., Dayo, Z. A., Abro, W. A., Uddin, M. I., Khan, I., … & Hu, Z. (2022). A deep learning approach for brain tumor classification using MRI images. Computers and Electrical Engineering, 101, 108105.
  28. Ahmad, I., Ullah, I., Khan, W. U., Ur Rehman, A., Adrees, M. S., Saleem, M. Q., … & Shafiq, M. (2021). Efficient algorithms for E-healthcare to solve multiobject fuse detection problem. Journal of Healthcare Engineering, 2021, 1-16.
  29. Ait Amou, M., Xia, K., Kamhi, S., & Mouhafid, M. (2022, March). A Novel MRI Diagnosis Method for Brain Tumor Classification Based on CNN and Bayesian Optimization. In Healthcare (Vol. 10, No. 3, p. 494). MDPI.
  30. Nazir, M., Khan, M. A., Saba, T., & Rehman, A. (2019, April). Brain tumor detection from MRI images using multi-level wavelets. In 2019 international conference on Computer and Information Sciences (ICCIS) (pp. 1-5). IEEE.
  31. Sharif, M. I., Khan, M. A., Alhussein, M., Aurangzeb, K., & Raza, M. (2021). A decision support system for multimodal brain tumor classification using deep learning. Complex & Intelligent Systems, 1-14.
  32. Vankdothu, R., Hameed, M. A., & Fatima, H. (2022). A brain tumor identification and classification using deep learning based on CNN-LSTM method. Computers and Electrical Engineering, 101, 107960.
  33. Abirami, S., & Venkatesan, G. P. (2022). Deep learning and spark architecture based intelligent brain tumor MRI image severity classification. Biomedical Signal Processing and Control, 76, 103644.
  34. Sajjad, M., Khan, S., Muhammad, K., Wu, W., Ullah, A., & Baik, S. W. (2019). Multi-grade brain tumor classification using deep CNN with extensive data augmentation. Journal of computational science, 30, 174-182.
  35. Ramtekkar, P. K., Pandey, A., & Pawar, M. K. (2023). Accurate detection of brain tumor using optimized feature selection based on deep learning techniques. Multimedia Tools and Applications, 1-31.
  36. Saeedi, S., Rezayi, S., Keshavarz, H., & R. Niakan Kalhori, S. (2023). MRI-based brain tumor detection using convolutional deep learning methods and chosen machine learning techniques. BMC Medical Informatics and Decision Making, 23(1), 16.
  37. Anagun, Y. (2023). Smart brain tumor diagnosis system utilizing deep convolutional neural networks. Multimedia Tools and Applications, 1-27.
  38. Arbane, M., Benlamri, R., Brik, Y., & Djerioui, M. (2021, February). Transfer learning for automatic brain tumor classification using MRI images. In 2020 2nd International Workshop on Human-Centric Smart Environments for Health and Well-being (IHSH) (pp. 210-214). IEEE.
  39. Chollet, F. (2017). Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1251-1258).
  40. Chaturvedi, S. S., Tembhurne, J. V., & Diwan, T. (2020). A multi-class skin Cancer classification using deep convolutional neural networks. Multimedia Tools and Applications, 79(39-40), 28477-28498.
  41. Raza, A., Munir, K., & Almutairi, M. (2022). A novel deep learning approach for deepfake image detection. Applied Sciences, 12(19), 9820.
  42. Szegedy, C., Ioffe, S., Vanhoucke, V., & Alemi, A. (2017, February). Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI conference on artificial intelligence (Vol. 31, No. 1).
  43. Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., … & Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
  44. Khan, Z. Y., & Niu, Z. (2021). CNN with depthwise separable convolutions and combined kernels for rating prediction. Expert Systems with Applications, 170, 114528.
  45. Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  46. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT press.
  47. Gao, F., Li, B., Chen, L., Shang, Z., Wei, X., & He, C. (2021). A softmax classifier for high-precision classification of ultrasonic similar signals. Ultrasonics, 112, 106344.
  48. Maas, A. L., Hannun, A. Y., & Ng, A. Y. (2013, June). Rectifier nonlinearities improve neural network acoustic models. In Proc. icml (Vol. 30, No. 1, p. 3).
