
A brain tumor comprises abnormal cells in the brain, the central spinal canal, or intracranial hard neoplasms, and may be either benign or malignant [1]. In the United States, about 86,010 new cases of non-malignant and malignant brain tumors were estimated to be diagnosed in 2019 [2]. Between 2012 and 2016, 79,718 deaths were attributed to malignant brain tumors, an average annual mortality rate of 4.42. The mortality rate due to brain tumors has increased in both adults and children.
Classifying brain tumor subtypes is challenging for several reasons, and experts continue to develop new technology to improve detection accuracy; several approaches are required to identify a brain tumor. Among cancers, brain tumors are one of the most fatal forms, with an aggressive nature, heterogeneous characteristics, and a low survival rate. Based on factors such as type, location and texture properties, brain tumors are categorized into different types (e.g., meningioma, CNS lymphoma, glioma, acoustic neuroma, pituitary, etc.) [3]. The clinical incidence rates of meningioma, pituitary tumor and glioma among brain tumor types are 15%, 15% and 45% respectively [4]. Patient survival can be predicted and diagnosed from the tumor type, which guides the choice of treatment ranging from chemotherapy to radiotherapy. Thus, tumor grading is highly desirable for proper planning and monitoring of brain tumors [5].
Glioma is the major tumor type and has three subtypes: 1) ependymomas, 2) astrocytomas, and 3) oligodendrogliomas. It originates in the glial cells that surround nerve cells. Its genetic features can identify it further and help predict its behavior and future treatment. Meningioma is another tumor type that originates in the brain; it occurs mostly in women and grows slowly, often without symptoms [6]. Pituitary tumors grow in the pituitary gland; they are benign and do not spread through the body [6].
Researchers have recently employed many artificial-intelligence-based machine learning methods to predict tumors. Feature extraction, i.e., computing the most relevant features, is the most crucial part of such techniques and remains a challenging task: selecting and computing the most relevant features is tedious and requires prior knowledge of the problem domain. Morphological features for detecting brain tumor types can easily lead to misclassification because different tumor types resemble each other. The extracted features are then fed as input to classifiers of the different brain tumor types [7]. Recently, researchers have computed different feature extraction methods including Elliptic Fourier descriptors (EFDs), texture, scale-invariant feature transform (SIFT) and morphological features. Rathore et al. [8] used ensemble methods to detect colon biopsy by computing hybrid features, and also computed geometric features for the prediction of colon cancer [9]. Hussain et al. [10] extracted EFDs, SIFT, texture, entropy and morphological features to detect prostate cancer. Moreover, Asim et al. [11] computed hybrid features to detect Alzheimer's disease (AD). Graphical methods are expensive, and computer-aided diagnosis (CAD) methods based merely on texture properties cannot properly capture background knowledge about morphological features. To properly detect a brain tumor and its location, radiologists analyze image features, which depends on their personal skills and expertise. Hand-crafted feature design thus remains a tedious and challenging task.
In the past, researchers employed various machine learning (ML) algorithms with various feature extraction approaches in the medical field. Gray-level co-occurrence matrix (GLCM) and Berkeley wavelet transform (BWT) features were extracted in [12] to detect brain tumors. Moreover, Reboucas et al. [13] computed GLCM features to analyze human tissue densities, and Dhruv et al. [14] studied GLCM and Haralick texture features for the analysis of 3D medical images. Hussain et al. [10] applied support vector machines (SVM) with several kernels to detect prostate cancer from a combination of feature extraction strategies. Zheng et al. [15] integrated SVM and graph cuts for medical image segmentation. Taie and Ghonaim [16] applied Chicken Swarm Optimization (CSO)-based algorithms along with SVM for brain tumor diagnosis. Abd-Ellah et al. [17] used kernel SVM to classify brain tumor MRIs, and Alquran et al. [18] applied SVM to detect melanoma skin cancer. Wang et al. [19] proposed stationary wavelet entropy (SWE) to extract brain image features; they obtained improved classification performance by replacing wavelet entropy (WE), the discrete wavelet transform (DWT) and wavelet energy (WN) with the proposed SWE, which averages the variants of the DWT. Zhang et al. [20] computed Hu moment invariant (HMI) features from MR brain images and fed these HMI features to a generalized eigenvalue proximal SVM (GEPSVM) and a twin support vector machine (TSVM); the proposed methods outperformed others in the detection of brain tumors.
In this study, we extracted traditional features such as entropy, morphological, texture, EFDs and SIFT features, proposed a new feature extraction approach based on RICA features to classify multi-class brain tumor types, and applied ML techniques.
Figure 1 shows the schematic diagram for detecting the multi-class brain tumor types (i.e., meningioma, glioma and pituitary) by extracting RICA-based features from brain MRIs and applying ML techniques such as SVM with its kernels and LDA with 10-fold cross-validation. After extracting the features, the MRI data was split into 70% for training and 30% for testing.
The brain tumor CE-MRI dataset used in this study was taken from the publicly available database provided by the School of Biomedical Engineering, Southern Medical University, Guangzhou, China (https://figshare.com/articles/dataset/brain_tumor_dataset/1512427). The data are detailed in the previous studies of Cheng et al. [21,22] on brain tumor classification via adaptive sparse pooling [21] and tumor region augmentation [22]. The dataset contains 3064 T1-weighted contrast-enhanced MRI images acquired at Nanfang Hospital and the General Hospital of Tianjin Medical University, China, from 2005. There are three types of brain tumor from 233 patients: glioma (1426 slices), meningioma (708 slices) and pituitary (930 slices). All images were acquired in three planes: axial (994 images), sagittal (1025 images) and coronal (1045 images). The data is labelled 1 for meningioma, 2 for glioma and 3 for pituitary tumor. Experienced radiologists designated the suspicious regions of interest (ROIs) in the MR images. The dataset was originally provided in MATLAB .mat format, where each file stores a struct with a label specifying the tumor type for a particular patient ID, the brain image data in 512 × 512 uint16 format, a vector storing the coordinates of discrete points on the tumor border, and a binary mask image with 1 indicating the tumor region. The images have an in-plane resolution of 512 × 512 with pixel size 0.49 × 0.49 mm². The slice thickness is 6 mm and the slice gap is 1 mm. Each patient contributes approximately 1–6 images; most patients have 1–3 images and very few have 4–6. The CE-MRI data partitioning is detailed in section 2.4 and Table 1 below:
| Tumor type | Number of patients | Number of MR images | Axial | Coronal | Sagittal |
|---|---|---|---|---|---|
| Meningioma | 82 | 708 | 209 | 268 | 231 |
| Glioma | 89 | 1426 | 494 | 437 | 495 |
| Pituitary | 62 | 930 | 291 | 319 | 320 |
| Total | 233 | 3064 | | | |
In this study, we divided the data into train and test sets based on patient ID, where 70% of patients' data was used for training and 30% for testing for each tumor type, with a single slice assigned to each tumor type as performed in the previous studies of Abiwinanda et al. [23], Cheng et al. [22], Sajjad et al. [24], Zia et al. [25], Badža and Barjaktarović [26], Gumaei et al. [27], Swati et al. [4], and Huang et al. [28]. To overcome the problem of overfitting, 10-fold cross-validation was also performed.
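A patient-level split of this kind can be sketched as follows. This is a minimal illustration, not the authors' code; the toy patient IDs and helper name are hypothetical, and only the 70/30 fraction and the no-leakage requirement come from the text:

```python
import random

def split_by_patient(slice_patient_ids, train_frac=0.70, seed=0):
    """Split slice indices into train/test so no patient appears in both sets.

    slice_patient_ids: list where entry i is the patient ID of slice i.
    Returns (train_idx, test_idx) lists of slice indices.
    """
    patients = sorted(set(slice_patient_ids))
    rng = random.Random(seed)
    rng.shuffle(patients)                      # randomize patient assignment
    n_train = int(round(train_frac * len(patients)))
    train_patients = set(patients[:n_train])
    train_idx = [i for i, p in enumerate(slice_patient_ids) if p in train_patients]
    test_idx = [i for i, p in enumerate(slice_patient_ids) if p not in train_patients]
    return train_idx, test_idx

# hypothetical toy example: 6 slices from 4 patients
ids = ["p1", "p1", "p2", "p3", "p3", "p4"]
tr, te = split_by_patient(ids)
```

Splitting on patient IDs rather than slices is what prevents slices of one patient from leaking across the train/test boundary.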
The extraction of the most relevant features is one of the most important steps for improving detection performance. We extracted hybrid features as employed in our recent studies by Hussain et al., e.g., detecting prostate cancer with a combination of features [10], congestive heart failure with multimodal features [29], and arrhythmia with hybrid features [30]. In this study, we computed traditional features based on morphological and texture features, along with robust RICA features, from the multi-class brain tumor images (pituitary, glioma and meningioma) and applied ML methods including SVM with its kernels and LDA. RICA features, owing to their sparsity and sigmoid nonlinearity, are more robust to noise in imaging data. The brain tumor types differ in several factors such as type, location and texture of the tumor; thus, traditional features may not provide better detection performance. In contrast, RICA features appear more appropriate for capturing the multivariate information hidden in the brain tumor types. The traditional features extracted were of the following categories:
Texture features have been effectively utilized in solving classification-related issues [31], notably by Esgiar et al. to classify colon biopsies using microscopic image analysis [32] and fractal analysis [33]. Texture features are obtained from the Gray-level co-occurrence matrix (GLCM), which captures the spatial relationship of gray levels in an image. An entry (i, j) of the co-occurrence matrix describes how often gray levels i and j co-occur at relative orientation θ and distance d. Commonly θ takes one of four directions (0°, 45°, 90°, 135°). Around 15 features can be obtained using the GLCM; we studied angular second moment, entropy, correlation, local homogeneity, shade, variance, average, sum, prominence, difference entropy, sum entropy, difference variance, contrast, sum variance and information measure of correlation. The texture features extracted from the brain tumor types are reflected in Table 2 below.
| Features | Formulas | Description |
|---|---|---|
| Contrast (t) | $\sum_{x=1}^{K}\sum_{y=1}^{K}(x-y)^{2}p_{xy}$ | Measures the contrast between the current pixel and its neighbor. |
| Correlation (ρ) | $\sum_{x=1}^{K}\sum_{y=1}^{K}\frac{(x-\mu_x)(y-\mu_y)p_{xy}}{\sigma_x \sigma_y}$ | Measures the degree of correlation between the current pixel and its neighbor. |
| Dissimilarity (Dis) | $\sum_{x=1}^{K}\sum_{y=1}^{K}\lvert x-y\rvert\, p_{xy}$ | Measures the difference in images. |
| Entropy | $\sum_{x=1}^{K}\sum_{y=1}^{K}p_{xy}(-\ln p_{xy})$ | Measures the encoded information in an image. |
| Energy (n) | $\sum_{x=1}^{K}\sum_{y=1}^{K}p_{xy}^{2}$ | Measures the uniformity of an image. |
| Homogeneity (h) | $\sum_{x=1}^{K}\sum_{y=1}^{K}\frac{p_{xy}}{1+\lvert x-y\rvert}$ | Measures the spatial closeness of the elements of G to the diagonal of the matrix. |
| Randomness (r) | $-\sum_{x=1}^{K}\sum_{y=1}^{K}p_{xy}\log_2 p_{xy}$ | Measures the randomness of the elements of the GLCM. |
| Mean (µ) | $\mu_x=\sum_{x=1}^{K}\sum_{y=1}^{K}x\,p_{xy},\quad \mu_y=\sum_{x=1}^{K}\sum_{y=1}^{K}y\,p_{xy}$ | The weighted sums over all values, where p is the probability mass function. |
| Variance (σ²) | $\sigma_x^{2}=\sum_{x=1}^{K}\sum_{y=1}^{K}p_{xy}(x-\mu_x)^{2},\quad \sigma_y^{2}=\sum_{x=1}^{K}\sum_{y=1}^{K}p_{xy}(y-\mu_y)^{2}$ | Measures how far a set of numbers is spread out from its mean. |
| Standard deviation (σ) | $\sigma_x=\sqrt{\sigma_x^{2}},\quad \sigma_y=\sqrt{\sigma_y^{2}}$ | Quantifies the dispersion of the values of a data set. |
These features are computed from the GLCM G, where x and y are the row and column indices of G. Here p_{xy} is the (x, y)th entry of G divided by the sum of its elements, μ_x and μ_y are the means, and σ_x and σ_y the standard deviations, of the xth row and yth column of G.
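To make the GLCM definitions above concrete, the following minimal sketch (plain Python, a hypothetical toy binary image, offset θ = 0° and d = 1) builds the normalized co-occurrence matrix and evaluates two of the Table 2 features:

```python
def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy).
    img is a 2-D list of integer gray levels in 0..levels-1."""
    p = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                p[img[r][c]][img[r2][c2]] += 1   # count the co-occurring pair
                total += 1
    return [[v / total for v in row] for row in p]

def contrast(p):
    """Sum of (x - y)^2 * p_xy over all entries."""
    K = len(p)
    return sum((i - j) ** 2 * p[i][j] for i in range(K) for j in range(K))

def homogeneity(p):
    """Sum of p_xy / (1 + |x - y|) over all entries."""
    K = len(p)
    return sum(p[i][j] / (1 + abs(i - j)) for i in range(K) for j in range(K))

# toy 2-level image (illustration only)
img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 0]]
P = glcm(img, levels=2)
```

For this image the six horizontal pixel pairs give P = [[1/6, 2/6], [1/6, 2/6]], so contrast is 0.5 and homogeneity 0.75; real images simply yield larger co-occurrence matrices over more gray levels.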
Tissue morphology plays a vital part in deciding whether tissues are malignant or normal. Morphological features convert the morphology of an image into a set of quantitative values used in classification, and they have been utilized extensively in classification [34] and segmentation [35]. A morphological feature extraction module (FEM) takes a binary patch as input and finds associated factors in the clusters. Researchers in the past extracted morphological features such as perimeter (p), eccentricity (y), area (a), convex area (x), Euler number (l), orientation (e), compactness (c), and lengths of the major (m1) and minor (m2) axes. In this study, we computed the following morphological features, as reflected in Table 3:
| Features | Formulas | Description |
|---|---|---|
| Area (A) | Total number of pixels in a region | Total count of pixels that a specific region of the image contains. |
| Perimeter (P) | Pixels at the boundary of a region | Total count of pixels on the boundary of the region. |
| Solidity | $\frac{Area}{Convex\ Area}$ | The density of an object: the ratio of its area to that of its convex hull. |
| Roundness | $\frac{4\pi \times Area}{(Convex\ Perimeter)^{2}}$ | Distinguishes lines and circles from other regions of the image. |
| Convex area | Total number of pixels in the convex image | Counts the pixels of the convex hull image. |
| Convexity | $\frac{Convex\ Perimeter}{Perimeter}$ | The ratio between the perimeter of the convex hull and that of the object itself. |
| Compactness | $\frac{4\pi \times Area}{(Perimeter)^{2}}$ | The degree of deviation from a circle: the ratio of the object area to the circle area. |
| Maximum radius (MaxR) | $\max\big(\mathrm{dist}(C(x,y),\ \mathrm{boundary}(x,y))\big)$ | The maximum distance from the boundary of the region to its center C. |
| Minimum radius (MinR) | $\min\big(\mathrm{dist}(C(x,y),\ \mathrm{boundary}(x,y))\big)$ | The minimum distance from the boundary of the region to its center C. |
| Euler number (EUL_NO) | Number of objects in the region − number of holes in these objects | The difference between the affected and unaffected areas of an image. |
| Standard deviation | $\sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^{2}}$ | Used to calculate the contrast of an image. |
| Entropy | $-\sum p \log_2 p$ | A statistical measure that characterizes the texture of the image. |
| Eccentricity (ECT) | $\sqrt{\left(\frac{MaxR-MinR}{MaxR}\right)^{2}}$ | The ratio of the distance between the major axis and the ellipse foci; its value lies in 0–1. |
| Rectangularity | $\frac{Area}{MaxR-MinR}$ | Measures the similarity of the image shape to a rectangle. |
| Elongation | $1-\frac{MinR}{MaxR}$ | Measures the elongation of the object. |
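A few of the Table 3 measures can be illustrated on a binary mask. The sketch below (plain Python) computes area, perimeter and compactness; the boundary-pixel count used for the perimeter is only one of several common discrete conventions, not necessarily the one used in the study:

```python
import math

def area(mask):
    """Area: total number of foreground (1) pixels."""
    return sum(sum(row) for row in mask)

def perimeter(mask):
    """Perimeter: count foreground pixels touching the background or the
    image edge (4-connectivity), a discrete boundary approximation."""
    rows, cols = len(mask), len(mask[0])
    per = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if not (0 <= rr < rows and 0 <= cc < cols) or not mask[rr][cc]:
                        per += 1       # boundary pixel found
                        break
    return per

def compactness(mask):
    """Compactness: 4 * pi * Area / Perimeter^2, deviation from a circle."""
    p = perimeter(mask)
    return 4 * math.pi * area(mask) / (p * p)

# toy 3x3 solid square: area 9, boundary pixels 8
square = [[1, 1, 1],
          [1, 1, 1],
          [1, 1, 1]]
```

On a real tumor ROI these would be applied to the binary mask provided with each CE-MRI slice.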
RICA does not require any class label information because of its unsupervised nature. The RICA algorithm removes the deficiencies of the ICA algorithm, and the results it yields are more robust than ICA's. It learns through a sparse feature learning mechanism: the sparse-filter-based algorithm can distinguish various natural signals, and these features can play a vital role in many ML techniques.
Consider unlabeled data $\{y^{(i)}\}_{i=1}^{n}$ with $y^{(i)}\in\mathbb{R}^{m}$. The optimization problem of standard ICA, estimated using optimization algorithms [36] and kernel sparse representation [37], is mathematically defined as:

$$\min_X \frac{1}{n}\sum_{i=1}^{n} h(Xy^{(i)}) \quad (2.1)$$

$$\text{subject to } XX^{T}=I$$

where h(·) denotes a nonlinear penalty function, $X\in\mathbb{R}^{L\times m}$ is a matrix, L represents the number of vectors and I is the identity matrix. The constraint $XX^{T}=I$ prevents the vectors in X from becoming degenerate. A smooth penalty function is used to handle this situation, as indicated below:

$$h(\cdot)=\log(\cosh(\cdot)) \quad (2.2)$$ [38]
Several orthonormality constraints obstruct the full learning of standard ICA, and this drawback stops ICA from scaling to high-dimensional data. To resolve this matter, RICA replaces the hard constraint with a soft reconstruction cost. After this replacement, RICA can be characterized by equation (2.3):
$$\min_X \frac{\lambda}{n}\sum_{i=1}^{n}\left\lVert X^{T}Xy^{(i)}-y^{(i)}\right\rVert_2^{2}+\sum_{i=1}^{n}\sum_{k=1}^{L}h\!\left(X_k y^{(i)}\right) \quad (2.3)$$
Here the parameter λ > 0 controls the tradeoff between reconstruction error and sparsity.
The penalty h can only produce sparse representations, not invariant ones [38]. Thus, in RICA, Le et al. [39,40] (efficient overcomplete feature learning algorithms [39]; building features using feature learning [40]) swapped it with an additional L2 pooling penalty, which encourages pooling features to group correlated features together. L2 pooling [41,42] is a two-layered network with a square nonlinearity $(\cdot)^{2}$ in the first layer and a square-root nonlinearity $\sqrt{(\cdot)}$ in the second layer, as reflected in equation (2.4):
$$h\!\left(Xy^{(i)}\right)=\sum_{k=1}^{L}\sqrt{\varepsilon+H_k\left(\left(Xy^{(i)}\right)\odot\left(Xy^{(i)}\right)\right)} \quad (2.4)$$
Here $H_k$ represents a row of the spatial pooling matrix $H\in\mathbb{R}^{L\times L}$, whose elements are set to constant weights of 1; ⊙ represents element-wise multiplication and ε > 0 is a small constant.
RICA thus yields a sparse representation of the actual data. The step-by-step procedure for computing features with the RICA algorithm is reflected in Figure 2. The RICA feature model is obtained by applying RICA to the matrix of predictor data X containing p variables, with q the number of features to extract from X; RICA thus learns a p-by-q matrix of transformation weights. The value of q can be less than or greater than the number of predictor variables, giving an undercomplete or overcomplete feature representation respectively. In this study, we set q to 100 features and used the default values of alpha and gamma.
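The RICA objective of equation (2.3) with the log-cosh penalty of equation (2.2) can be minimized directly. The sketch below (NumPy, plain gradient descent on random data, omitting the L2 pooling of equation (2.4)) is an illustration of the cost function only, not the authors' implementation; the learning rate and iteration count are arbitrary choices:

```python
import numpy as np

def rica_objective(X, Y, lam):
    """RICA cost (Eq 2.3): soft reconstruction + log-cosh sparsity penalty.
    X: (L, m) weight matrix, Y: (m, n) data matrix, lam: tradeoff lambda > 0."""
    n = Y.shape[1]
    E = X.T @ X @ Y - Y                          # reconstruction residual
    recon = (lam / n) * np.sum(E ** 2)
    sparsity = np.sum(np.log(np.cosh(X @ Y)))    # h(.) = log(cosh(.)), Eq 2.2
    return recon + sparsity

def rica_gradient(X, Y, lam):
    """Analytic gradient of the objective with respect to X."""
    n = Y.shape[1]
    E = X.T @ X @ Y - Y
    g_recon = (2 * lam / n) * (X @ Y @ E.T + X @ E @ Y.T)
    g_sparse = np.tanh(X @ Y) @ Y.T
    return g_recon + g_sparse

rng = np.random.default_rng(0)
Y = rng.standard_normal((4, 50))                 # m=4 variables, n=50 samples
X = 0.1 * rng.standard_normal((3, 4))            # learn L=3 feature vectors
lam = 0.5
costs = [rica_objective(X, Y, lam)]
for _ in range(300):                             # plain gradient descent
    X -= 0.005 * rica_gradient(X, Y, lam)
    costs.append(rica_objective(X, Y, lam))
```

The soft reconstruction term is what lets X be overcomplete (L greater than m), which a hard $XX^{T}=I$ constraint would forbid.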
Vladimir Vapnik proposed SVM in 1979; it is a state-of-the-art algorithm used in different fields including medical diagnosis [43], visual pattern recognition [44] and machine learning [45]. SVM has been successfully used in many applications including text recognition, facial expression recognition, emotion recognition, biometrics, and content-based image retrieval. It constructs a hyperplane in a high-dimensional space that achieves the largest distance to the nearest training data point of any class; a larger functional margin gives a lower generalization error. To achieve this for nonlinear boundaries, SVM uses the kernel trick. Linear and nonlinear separation with margins and slack variables for misclassified examples are reflected in Figures 3(a, b) and 4(a, b).
Consider a hyperplane defined by x·w + b = 0, where w is its normal. Linearly separable data are labelled as:

$$\{x_i, y_i\},\quad x_i\in\mathbb{R}^{N_d},\ y_i\in\{-1,1\},\ i=1,2,\ldots,N \quad (2.5)$$

Here $y_i$ is the class label of the two-class SVM. To obtain the optimal boundary with maximal margin, the objective function $E=\frac{1}{2}\lVert w\rVert^{2}$ is minimized subject to

$$x_i\cdot w+b\ge +1 \quad \text{for } y_i=+1$$

$$x_i\cdot w+b\le -1 \quad \text{for } y_i=-1 \quad (2.6)$$

These combine into a single set of inequalities:

$$y_i\,(x_i\cdot w+b)\ge 1 \quad \text{for all } i$$
Generally, the data are not linearly separable; in such cases a slack variable $\xi_i$ is used to indicate the amount of misclassification. The objective function is then reformulated as:

$$E=\frac{1}{2}\lVert w\rVert^{2}+C\sum_i L(\xi_i) \quad (2.7)$$

subject to

$$y_i\,(x_i\cdot w+b)\ge 1-\xi_i \quad \text{for all } i$$
The first term on the right-hand side is the regularization term, which gives the SVM the ability to generalize well from sparse data. The second term, the empirical risk, accounts for the points that lie outside the margin. The cost function is denoted by L and the hyperparameter by C, which trades off minimizing the empirical risk against maximizing the margin. The linear-error cost function is most used because of its robustness to outliers. The dual formulation with $L(\xi_i)=\xi_i$ is

$$\alpha^{*}=\max_{\alpha}\left(\sum_i \alpha_i-\frac{1}{2}\sum_{i,j}\alpha_i\alpha_j\, y_i y_j\, x_i\cdot x_j\right) \quad (2.8)$$

subject to

$$0\le \alpha_i\le C \quad \text{and} \quad \sum_i \alpha_i y_i=0$$

in which $\alpha=\{\alpha_1,\alpha_2,\alpha_3,\ldots,\alpha_N\}$ is the set of Lagrange multipliers of the constraints in the primal optimization problem. The optimal decision boundary is now given by

$$w_0=\sum_i \alpha_i y_i x_i \quad (2.9)$$
SVM for non-linearly separable data
The kernel trick, recommended by Müller et al. (2001), deals with nonlinearly separable data. A nonlinear mapping takes the input space to a higher-dimensional feature space, and the dot product between two vectors in the input space is replaced by a kernel function evaluated in the feature space.
Figure 5 reflects the SVM kernel parameter optimization settings. The kernel parameters, box constraints, and polynomial orders (1, 2, 3) were used according to the default settings. As shown in the figure, three SVM kernels (linear, quadratic, cubic) are used in this research work for the classification of brain tumors. All three SVM classifiers are trained with 10-fold cross-validation and automatic kernel scaling. The box constraint parameter is used to control overfitting. SVM is a binary classifier, so to train on multi-class data the one-vs-one coding parameter is used: one class is treated as positive, another as negative, and all other classes are left out of training; this process is repeated for all class combinations.
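The one-vs-one coding described above can be sketched independently of any particular binary classifier. In the illustration below the pairwise decisions are simulated by a hypothetical nearest-center rule (the class "centers" and helper names are invented for the example); with real SVMs, each pairwise decision would come from a trained binary model:

```python
from itertools import combinations
from collections import Counter

def one_vs_one_predict(sample, classes, pairwise_predict):
    """Majority vote over all class pairs. pairwise_predict(sample, a, b)
    returns the winning class (a or b) of the binary classifier for a vs b."""
    votes = Counter(pairwise_predict(sample, a, b)
                    for a, b in combinations(classes, 2))
    return votes.most_common(1)[0][0]

# hypothetical stand-in for three trained binary classifiers:
# assign to whichever of the two classes has the nearer 1-D "center"
centers = {"meningioma": 1.0, "glioma": 2.0, "pituitary": 3.0}

def toy_pair(sample, a, b):
    return a if abs(sample - centers[a]) < abs(sample - centers[b]) else b

label = one_vs_one_predict(2.1, ["meningioma", "glioma", "pituitary"], toy_pair)
```

For three classes this trains and votes over 3 pairwise classifiers; for k classes, k(k−1)/2.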
The most used kernel functions are polynomial and radial basis function (RBF) kernels. The kernels used here are expressed mathematically as follows.

SVM linear kernel:

$$K(x_i,y_i)=x_i\cdot y_i+1 \quad (2.10)$$

SVM quadratic kernel:

$$K(x_i,y_i)=\left(x_i\cdot y_i+1\right)^{2} \quad (2.11)$$

SVM cubic kernel:

$$K(x_i,y_i)=\left(x_i\cdot y_i+1\right)^{3} \quad (2.12)$$
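Equations (2.10)-(2.12) translate directly into code; a minimal sketch (plain Python, toy vectors chosen for illustration):

```python
def dot(x, y):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(x, y))

def svm_linear(x, y):     # Eq (2.10)
    return dot(x, y) + 1

def svm_quadratic(x, y):  # Eq (2.11)
    return (dot(x, y) + 1) ** 2

def svm_cubic(x, y):      # Eq (2.12)
    return (dot(x, y) + 1) ** 3

x, y = [1.0, 2.0], [3.0, 4.0]   # dot(x, y) = 11
```

The polynomial degree controls the flexibility of the decision boundary: the cubic kernel can separate patterns the linear kernel cannot, at the risk of overfitting that the box constraint is meant to control.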
Belhumeur [46] in 1997 proposed LDA as one of the classical algorithms in pattern recognition and artificial intelligence (AI). Its main functionality is to project high-dimensional samples into a low-dimensional space, extracting classification information while compressing the feature space dimension. LDA has been successfully employed in many applications; for example, Pathak et al. [47] applied it to remove redundancy and inconsistency in data. LDA can be used for both classification and dimensionality reduction; here we used it for multi-class classification.
LDA is a simple classification method using the generative methodology. It assumes that each class follows a Gaussian distribution and that every class shares the same covariance matrix; with these assumptions, LDA is a linear classification method. If they happen to match the actual data distribution, LDA is optimal in the sense that it converges to the Bayes classifier as the number of data points tends to infinity (the parameter estimates then correspond to the true distribution parameters). In fact, LDA needs few computations to approximate the parameters of the classifier, amounting to estimating the class proportions and means plus inverting the covariance matrix. LDA takes the generative approach of presuming that the data of each class are generated by a Gaussian probability density function: the density of x in population $\pi_i$ is multivariate normal with mean $\mu_i$ and a common variance-covariance matrix. The formula for this probability density function is:
$$p_{X|Y}(x\mid Y=y)=\frac{1}{(2\pi)^{d/2}\,\lvert\Sigma_y\rvert^{1/2}}\exp\!\left(-\frac{1}{2}(x-\mu_y)^{T}\Sigma_y^{-1}(x-\mu_y)\right) \quad (2.13)$$
And the covariance matrix $\Sigma_y$ is the same for all labels:

$$\forall y\in Y,\quad \Sigma_y=\Sigma \quad (2.14)$$
The parameters are approximated as follows. The prior probabilities are simply the fractions of data points in each class:

$$\forall y\in Y,\quad P(Y=y)=\frac{N_y}{N},\quad \text{with } N_y=\sum_{i=1}^{N}\mathbb{1}_{y_i=y} \quad (2.15)$$
The Gaussians' means are estimated by the sample means:

$$\forall y\in Y,\quad \mu_y=\frac{1}{N_y}\sum_{i:\,y_i=y}x_i \quad (2.16)$$
And the covariance matrix by

$$\Sigma=\frac{1}{N-\lvert Y\rvert}\sum_{y\in Y}\ \sum_{i:\,y_i=y}(x_i-\mu_y)(x_i-\mu_y)^{T} \quad (2.17)$$
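Equations (2.15)-(2.17) and the resulting linear discriminant can be sketched in a few lines of NumPy. This is an illustrative sketch on synthetic two-class data, not the authors' implementation; the function names and the discriminant form (standard for equal-covariance Gaussians) are the only assumptions:

```python
import numpy as np

def lda_fit(X, y):
    """Estimate LDA parameters: class priors (Eq 2.15), class means (Eq 2.16),
    and the pooled covariance (Eq 2.17). X: (N, d) data, y: (N,) labels."""
    classes = np.unique(y)
    N, d = X.shape
    priors = {c: float(np.mean(y == c)) for c in classes}      # N_y / N
    means = {c: X[y == c].mean(axis=0) for c in classes}       # sample means
    pooled = np.zeros((d, d))
    for c in classes:
        diff = X[y == c] - means[c]
        pooled += diff.T @ diff                                # within-class scatter
    pooled /= (N - len(classes))                               # 1 / (N - |Y|)
    return priors, means, pooled

def lda_predict(x, priors, means, pooled):
    """Assign x to the class maximizing the linear discriminant
    x' S^-1 mu_c - 0.5 mu_c' S^-1 mu_c + log P(c)."""
    inv = np.linalg.inv(pooled)
    scores = {c: x @ inv @ m - 0.5 * m @ inv @ m + np.log(priors[c])
              for c, m in means.items()}
    return max(scores, key=scores.get)

# synthetic two-class data (illustration only)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
priors, means, pooled = lda_fit(X, y)
```

Because the covariance is shared, the quadratic terms of the two Gaussian log-densities cancel and the decision boundary is linear, which is what makes LDA so cheap to fit.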
For training/testing data formulation, jack-knife 10-fold cross-validation (CV) was used. Performance was evaluated using metrics similar to those used to detect brain tumors with adaptive spatial pooling [21], margin information and learned distance metrics [48], bag-of-visual-words representations [49], and spatial layout information [50], as employed and tested in [21,48,49,50]. The CE-MRI data of 233 patients were randomly divided into 10 subsets of equal size. We also ensured that the subsets do not overlap and contain equal ratios of the different tumor types. Dividing according to patients ensures that images from the same patient do not appear simultaneously in the training and testing sets. Using 10-fold cross-validation, the data are partitioned into 10 folds; 9 folds participate in training and the remaining fold in testing, so the samples in the test fold are purely unseen. The entire process is repeated 10 times.
K-fold cross-validation is an effective preventative measure against overfitting. To tune the model, the dataset is split into multiple train-test bins: with k-fold CV, the dataset is divided into k folds, k−1 of which are used for model training while the remaining fold is used for testing. Moreover, the k-fold method is helpful for fine-tuning hyperparameters on the original training dataset, to determine how well the outcome of the ML model generalizes. The k-fold cross-validation procedure is reflected in Figure 6 below.
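The k-fold procedure can be sketched with a simple index generator (plain Python; the shuffling seed and function name are arbitrary choices for the sketch, and a per-patient variant would shuffle patient IDs instead of raw indices):

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation:
    each fold serves exactly once as the unseen test set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k near-equal folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(100, k=10))
```

Every sample appears in exactly one test fold, so the k test-fold scores average into an estimate of generalization performance.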
Researchers are devising automated tools to improve the prediction of brain tumor types because of their multivariate characteristics, and extracting the most relevant and appropriate features is still a challenging task. In this study, we first extracted the traditional texture and morphological features from the brain tumor types and computed the performance of machine learning classifiers such as LDA and SVM with linear, quadratic, cubic and cosine kernels. We then extracted RICA-based features suited to these multivariate characteristics and fed them to the same classifiers in a multi-class approach. The results reveal that the proposed feature extraction approach with cubic SVM yields the most appropriate predictions of the tumor types.
Table 4 shows the multi-class classification results for the brain tumor types (glioma, meningioma, pituitary) using the texture and morphological features. The LDA and SVM classifiers yielded moderate performance. Specifically, quadratic SVM yielded the best performance, with accuracy of 93.11% and AUC of 0.8928, followed by cubic SVM with accuracy 93.04% and AUC 0.8895, for predicting pituitary tumors in the multi-class setting. The other performance metrics are reflected in Table 4.
| Class | Sens. | Spec. | PPV | NPV | FPR | Acc. | AUC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LDA |  |  |  |  |  |  |  |
| Glioma | 100% | 16.09% | 47.79% | 100% | 0.839 | 52.54% | 0.5804 |
| Meningioma | 100% | 100% | 0 | 100% |  |  |  |
| Pituitary | 39.78% | 100% | 100% | 94.31% | 0 | 94.52% | 0.6989 |
| SVM Linear |  |  |  |  |  |  |  |
| Glioma | 100% | 54.42% | 60.67% | 100% | 0.455 | 73.24% | 0.7720 |
| Meningioma | 100% | 100% | 0 | 100% |  |  |  |
| Pituitary | 75.48% | 100% | 100% | 89.67% | 0 | 92.16% | 0.8774 |
| SVM Quadratic |  |  |  |  |  |  |  |
| Glioma | 100% | 62.30% | 63.95% | 100% | 0.376 | 77.42% | 0.8115 |
| Meningioma | 52.28% | 99.79% | 93.02% | 97.54% | 0.0020 | 97.42% | 0.7604 |
| Pituitary | 78.57% | 100% | 100% | 90.78% | 0 | 93.11% | 0.8928 |
| SVM Cubic |  |  |  |  |  |  |  |
| Glioma | 100% | 71.25% | 64.91% | 100% | 0.2875 | 81.23% | 0.8562 |
| Meningioma | 46.80% | 98.56% | 83.89% | 92.04% | 0.014 | 91.41% | 0.7268 |
| Pituitary | 77.90% | 100% | 100% | 90.79% | 0 | 93.04% | 0.8895 |
| SVM Cosine |  |  |  |  |  |  |  |
| Glioma | 100% | 75.50% | 67.42% | 100% | 0.2449 | 83.74% | 0.8775 |
| Meningioma | 47.61% | 99.29% | 90.45% | 93.08% | 0.007 | 92.92% | 0.7345 |
| Pituitary | 71.79% | 100% | 100% | 85.71% | 0 | 89.95% | 0.8589 |
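The per-class metrics reported in Tables 4 and 5 (sensitivity, specificity, PPV, NPV, FPR, accuracy) all follow from a one-vs-rest confusion matrix. A minimal sketch of that computation, with illustrative function and label names of our own:

```python
def ovr_metrics(y_true, y_pred, positive):
    """One-vs-rest metrics for one class of a multi-class problem."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),            # recall on the positive class
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp) if tp + fp else 0.0,  # precision
        "npv": tn / (tn + fn) if tn + fn else 0.0,
        "fpr": fp / (fp + tn),                     # = 1 - specificity
        "accuracy": (tp + tn) / len(y_true),
    }
```

Note that with this one-vs-rest view, a class's "accuracy" counts true negatives of the other classes, which is why the per-class accuracies in the tables exceed the overall multi-class accuracy.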
Table 5 reflects the multi-class classification results for the brain tumor types (glioma, meningioma, pituitary) based on the RICA features. The classifiers, LDA and SVM with its kernels, yielded their highest performance here. Specifically, the cubic-kernel SVM performed best, with accuracy 99.34% and AUC 0.9892 for predicting pituitary tumors, followed by the quadratic-kernel SVM with accuracy 98.10% and AUC 0.9699. For predicting meningioma, the cubic-kernel SVM yielded accuracy 96.96% and AUC 0.9348; for glioma, accuracy 95.88% and AUC 0.9635. Among the other classifiers, the highest multi-class prediction was obtained by LDA, followed by SVM linear and SVM cosine.
| Class | Sensitivity | Specificity | PPV | NPV | FPR | Accuracy | AUC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LDA |  |  |  |  |  |  |  |
| Glioma | 100% | 82.38% | 78.95% | 100% | 0.1761 | 89.39% | 0.9119 |
| Meningioma | 69.18% | 99.32% | 95.66% | 93.75% | 0.0067 | 93.99% | 0.8425 |
| Pituitary | 89.07% | 100% | 100% | 95.24% | 0 | 96.57% | 0.9453 |
| SVM Linear |  |  |  |  |  |  |  |
| Glioma | 100% | 84.64% | 81.45% | 100% | 0.153 | 90.83% | 0.9232 |
| Meningioma | 71.50% | 99.60% | 97.46% | 94.26% | 0.004 | 94.68% | 0.8555 |
| Pituitary | 88.32% | 100% | 100% | 94.63% | 0 | 96.18% | 0.9416 |
| SVM Quadratic |  |  |  |  |  |  |  |
| Glioma | 100% | 91.18% | 89.53% | 100% | 0.088 | 94.97% | 0.9559 |
| Meningioma | 84.51% | 99.63% | 98.31% | 96.20% | 0.0036 | 96.57% | 0.9207 |
| Pituitary | 93.89% | 100% | 100% | 97.31% | 0 | 98.10% | 0.9699 |
| SVM Cubic |  |  |  |  |  |  |  |
| Glioma | 100% | 92.27% | 91.38% | 100% | 0.072 | 95.88% | 0.9635 |
| Meningioma | 87.34% | 99.62% | 98.47% | 96.60% | 0.0038 | 96.96% | 0.9348 |
| Pituitary | 97.78% | 100% | 100% | 99.07% | 0 | 99.34% | 0.9892 |
| SVM Cosine |  |  |  |  |  |  |  |
| Glioma | 100% | 87.51% | 84.01% | 100% | 0.1248 | 92.46% | 0.9375 |
| Meningioma | 72.98% | 99.76% | 98.66% | 93.72% | 0.0024 | 94.45% | 0.8636 |
| Pituitary | 87.27% | 100% | 100% | 94.14% | 0 | 95.58% | 0.9336 |
Figure 7(a–e) shows the multi-class distribution of glioma (1426 slices), meningioma (708 slices), and pituitary (930 slices). In Figure 7(d), using the cubic-kernel SVM, of the 1426 glioma slices, 1337 were predicted as glioma, 113 as meningioma, and 13 as pituitary. Of the 708 meningioma slices, 84 were predicted as glioma, 580 as meningioma, and 9 as pituitary. Of the 930 pituitary slices, 5 were predicted as glioma, 15 as meningioma, and 908 as pituitary. The distributions for the other classifiers are also shown in Figure 7(a–e).
Researchers have applied various feature extraction approaches with ML and DL methods for binary classification of brain tumor types; the highest overall accuracies reported were 91.28% [22], 90.89% [51], 86.56% [52], and 84.19% [53]. For multi-class classification, LDA yielded accuracies of 96.48% (pituitary), 93.89% (meningioma), and 89.39% (glioma). With the linear-kernel SVM, the accuracy was 96.28% for pituitary, 94.45% for meningioma, and 90.76% for glioma. With the quadratic kernel, the highest detection was obtained for pituitary with accuracy 98.07%, followed by 96.18% for meningioma and 94.35% for glioma.
Figure 8(a–c) shows the multi-class separation, as the area under the receiver operating characteristic curve, based on the texture + morphological features and the machine learning techniques. The highest separation was AUC 0.8928 for detecting pituitary with the quadratic-kernel SVM, followed by AUC 0.8895 for pituitary with the cubic-kernel SVM.
Figure 9(a–c) reflects the multi-class separation for distinguishing a) glioma, b) meningioma, and c) pituitary using RICA features with the machine learning techniques. For glioma, the AUC was 0.9119 (LDA), 0.9232 (SVM linear), 0.9559 (SVM quadratic), 0.9635 (SVM cubic), and 0.9375 (SVM cosine). For meningioma, the AUC was 0.8425 (LDA), 0.8555 (SVM linear), 0.9207 (SVM quadratic), 0.9348 (SVM cubic), and 0.8636 (SVM cosine). For pituitary, the AUC was 0.9453 (LDA), 0.9416 (SVM linear), 0.9699 (SVM quadratic), 0.9892 (SVM cubic), and 0.9336 (SVM cosine).
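AUC values like these can be computed without an explicit ROC sweep, via the Mann-Whitney U statistic: the AUC equals the probability that a randomly chosen positive sample scores higher than a randomly chosen negative one, with ties counted as one half. A small sketch (our own, not the paper's implementation):

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """Rank-based AUC: fraction of (positive, negative) pairs
    where the positive outscores the negative; ties count 0.5."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

This O(n·m) form is fine for small evaluations; a rank-sum formulation brings it to O((n+m) log(n+m)) for large score sets.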
Table 6 presents the findings of different hand-crafted feature techniques, along with machine learning techniques, for classifying brain tumor from normal tissue and between brain tumor types, on the same and on different datasets. With LDA, the highest detection performance was obtained for pituitary, with accuracy 96.57% and AUC 0.9453, followed by meningioma and glioma. With the linear-kernel SVM, the highest detection performance was for pituitary, with accuracy 96.18% and AUC 0.9416; with the quadratic kernel, accuracy 98.10% and AUC 0.9699; with the cubic kernel, accuracy 99.34% and AUC 0.9892; and with the cosine kernel, accuracy 95.58% and AUC 0.9336.
To extract diagnostic information from MR images, researchers have employed several image analysis techniques, including tissue characterization [57], texture-based intracranial brain tumor detection [58], and combined tissue characterization and intracranial tumor detection [59]; these studies used texture analysis and pattern recognition to characterize brain tumor types. Recently, [60] employed an SVM to classify gliomas and meningiomas and obtained 95% overall accuracy in distinguishing these types. Moreover, [57] employed k-nearest neighbor and discriminant analysis to distinguish between oedematous and brain tumor tissues, achieving a maximum accuracy of 95%. Several recent studies have applied MR spectroscopic features, such as long-echo proton MRS signals [61], short echo times [62], tumor grading [63], a short-echo multicenter study [64], and short-echo metabolic patterns [65], or combinations of spectroscopic and texture features, to distinguish brain tumor types, achieving a maximum accuracy of 99% [64]. On the same benchmark dataset [37], authors extracting hand-crafted features with machine learning techniques and deep convolutional neural network methods obtained overall accuracies of 98% [7], 96.4% [54], 80% [66], 86.56% [52], 85.69% [25], 91.28% [22], 94.2% [55], 91.43% [53], and 96.67% [56]. In the present study, we used the MRI brain tumor dataset originally provided by Cheng et al. and used in their studies [21,22]. We compared our results with other work on the same dataset, namely Abiwinanda et al. [23], Cheng et al. [22], Sajjad et al. [24], Zia et al. [25], Badža and Barjaktarović [26], Gumaei et al. [27], Swati et al. [4], and Huang et al. [28], as reflected in Table 6.
| Author | Feature/Methods | Performance |
| --- | --- | --- |
| Machhale et al. [7] | SVM-KNN | Sensitivity: 100%; Specificity: 93.75%; Accuracy: 98% |
| Zacharaki et al. [54] | Cross-validation with different classifiers (LDA, k-NN, SVM) | Sensitivity: 75%; Specificity: 100%; Accuracy: 96.4% |
| Badža and Barjaktarović [26] | CNN | Accuracy: 95.40% |
| Gumaei et al. [27] | Regularized extreme learning machine (RELM) | Accuracy: 94.23% |
| Swati et al. [4] | Automatic content-based image retrieval (CBIR) system | Average precision: 96.13% |
| Huang et al. [28] | Convolutional neural network based on complex networks (CNNBCN) | Accuracy: 95.49% |
| Afshar et al. [52] | Capsule network method | Accuracy: 86.56% |
| Zia et al. [25] | Window-based image cropping | Sensitivity: 86.26%; Specificity: 90.90%; Accuracy: 85.69% |
| Sajjad et al. [24] | CNN with data augmentation | Sensitivity: 88.41%; Specificity: 96.12%; Accuracy: 94.58% |
| Cheng et al. [22] | Feature extraction: intensity histogram, GLCM, BoW; classifiers: SVM, SRC, KNN | Accuracy: 91.28% |
| Abiwinanda et al. [23] | CNN | Accuracy: 84.19% |
| Anaraki et al. [55] | Genetic algorithms | Accuracy: 94.2% |
| Paul et al. [53] | NN | Accuracy: 91.43% |
| Sachdeva et al. [56] | Segmentation and feature extraction | Highest accuracy: 96.67% |
| This work | RICA-based features, cubic-kernel SVM, multi-class: 1) pituitary, 2) meningioma, 3) glioma | 1) Accuracy: 99.34%, AUC: 0.9892; 2) Accuracy: 96.96%, AUC: 0.9348; 3) Accuracy: 95.88%, AUC: 0.9635 |
The authors who used the same database to predict the brain tumor types glioma, meningioma, and pituitary include Abiwinanda et al., Sajjad et al., Anaraki et al., Cheng et al., Swati et al., and Gumaei et al. Abiwinanda et al. [23] trained a CNN to predict the three most common types of brain tumor, implementing a simple architecture of convolution, max-pooling, and flattening layers followed by a fully connected hidden layer. Trained on the same publicly available dataset of 3064 T1-weighted CE-MRI images from Cheng et al. [22], it yielded a training accuracy of 98.51% and a best validation accuracy of 84.19%; region-based segmentation algorithms applied to the same dataset yielded accuracies ranging from 71.39% to 94.68%. Sajjad et al. [24] applied a CNN with and without data augmentation to detect the tumor types; with the original dataset, the highest performance was sensitivity 88.41%, specificity 96.12%, and accuracy 94.58%. Anaraki et al. [55] combined a CNN with genetic algorithms to classify MRI brain tumor grades and types, yielding a best classification accuracy of 94.2% for glioma, meningioma, and pituitary and improving on the results of Paul et al., who employed vanilla preprocessing with a shallow CNN to distinguish the same three types. Cheng et al. [22] classified the three tumor types using three feature extraction methods, namely the gray-level co-occurrence matrix (GLCM), intensity histogram, and bag-of-words model, with tumor region augmentation and partition to enhance performance. The improved performance is reflected in Table 6.
In many imaging pathologies, texture properties along with morphological imaging features have played a vital role in prediction, likely because these pathologies contain hidden information that is best extracted from texture and shape properties. Due to its heterogeneous characteristics, aggressive nature, and the involvement of several factors, brain tumor is categorized into different types (glioma, meningioma, pituitary, etc.), and researchers are developing various automated tools to improve prediction. The results from texture and morphological features reveal that some machine learning algorithms provide higher sensitivity while others provide higher specificity; it can be inferred that these features alone do not fit the heterogeneous characteristics well enough to predict the brain tumor types reliably. Extracting RICA features, by contrast, improved both sensitivity and specificity substantially with the quadratic- and cubic-kernel SVMs. RICA feature characteristics thus appear better tailored to distinguishing these multi-class brain tumor types, and hence improved the prediction performance.
In this study, we applied RICA-based advanced feature extraction to MRI scans of multi-class brain tumor patients. Proper classification of brain tumor type is of great significance for correct treatment. The proposed multi-class approach yielded the highest detection rate for pituitary, followed by meningioma and glioma. The results reveal that the proposed approach based on RICA features from brain tumor MRIs can be very helpful for early detection of tumor type and for treating patients to improve the survival rate.
In this study, we performed multi-class classification over a few brain tumor types. The data lack a description of the distribution of each patient type, which we will address in future work. We will also extend this work to other types of brain tumor and larger datasets, with additional feature extraction methods, and will apply the model to other types of medical images such as ultrasonography (ultrasound), radiography (X-ray), dermoscopic, endoscopic, and histology images, along with demographic information and tumor staging. Machine learning based on feature extraction is a hot research topic because it requires less computational time than deep learning, which demands more computational resources. Researchers are developing different feature extraction approaches to improve detection performance; we will extract more relevant features to further improve the machine learning (i.e., non-deep-learning) classification results, and will compare them with deep convolutional neural network methods under parameter optimization.
The authors declare that they have no conflict of interest.
Not applicable. Data were obtained from a publicly available, de-identified dataset; for this type of study, formal consent is not required. https://github.com/chengjun583/brainTumorRetrieval
| Tumor type | Number of patients | Number of MR images | MRI view | Images per view |
| --- | --- | --- | --- | --- |
| Meningioma | 82 | 708 | Axial / Coronal / Sagittal | 209 / 268 / 231 |
| Glioma | 89 | 1426 | Axial / Coronal / Sagittal | 494 / 437 / 495 |
| Pituitary | 62 | 930 | Axial / Coronal / Sagittal | 291 / 319 / 320 |
| Total | 233 | 3064 |  | 3064 |
| Features | Formulas | Description |
| --- | --- | --- |
| Contrast (t) | $\sum_{x=1}^{K}\sum_{y=1}^{K}(x-y)^2\,p_{xy}$ | Measures the contrast between the current pixel and its neighbor. |
| Correlation (ρ) | $\sum_{x=1}^{K}\sum_{y=1}^{K}\frac{(x-\mu_x)(y-\mu_y)\,p_{xy}}{\sigma_x\sigma_y}$ | Measures the degree of correlation between the current pixel and its neighbor. |
| Dissimilarity (Dis) | $\sum_{x=1}^{K}\sum_{y=1}^{K}\lvert x-y\rvert\,p_{xy}$ | Measures the difference within images. |
| Entropy | $\sum_{x=1}^{K}\sum_{y=1}^{K}p_{xy}(-\ln p_{xy})$ | Captures the encoded information in an image. |
| Energy (n) | $\sum_{x=1}^{K}\sum_{y=1}^{K}p_{xy}^2$ | Measures the uniformity of an image. |
| Homogeneity (h) | $\sum_{x=1}^{K}\sum_{y=1}^{K}\frac{p_{xy}}{1+\lvert x-y\rvert}$ | Measures the spatial closeness of the elements of G to the diagonal of the matrix. |
| Randomness (r) | $-\sum_{x=1}^{K}\sum_{y=1}^{K}p_{xy}\log_2 p_{xy}$ | Measures the randomness of the elements of the GLCM. |
| Mean (µ) | $\mu_x=\sum_{x=1}^{K}\sum_{y=1}^{K}x\,p_{xy}$, $\mu_y=\sum_{x=1}^{K}\sum_{y=1}^{K}y\,p_{xy}$ | Weighted sum of all values, where $p$ is the probability mass function. |
| Variance (σ²) | $\sigma_x^2=\sum_{x=1}^{K}\sum_{y=1}^{K}p_{xy}(x-\mu_x)^2$, $\sigma_y^2=\sum_{x=1}^{K}\sum_{y=1}^{K}p_{xy}(y-\mu_y)^2$ | Measures how far a set of numbers is spread out from their mean. |
| Standard deviation (σ) | $\sigma_x=\sqrt{\sigma_x^2}$, $\sigma_y=\sqrt{\sigma_y^2}$ | Quantifies the amount of dispersion of the values of a data set. |
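A tiny worked example of the GLCM statistics in the table above: given a normalized co-occurrence matrix $p_{xy}$, contrast, energy, and homogeneity are direct double sums. This is an illustrative sketch (0-based indices, a 2-level toy matrix), not the paper's implementation.

```python
def glcm_stats(p):
    """Contrast, energy, and homogeneity of a normalized KxK GLCM p."""
    K = len(p)
    contrast = sum((x - y) ** 2 * p[x][y] for x in range(K) for y in range(K))
    energy = sum(p[x][y] ** 2 for x in range(K) for y in range(K))
    homogeneity = sum(p[x][y] / (1 + abs(x - y)) for x in range(K) for y in range(K))
    return contrast, energy, homogeneity
```

In practice the GLCM itself is first accumulated from pixel pairs at a chosen offset and then normalized so its entries sum to one.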
| Features | Formulas | Description |
| --- | --- | --- |
| Area (A) | Total number of pixels in a region | Total count of pixels that a specific region of the image contains. |
| Perimeter (P) | Pixels at the boundary of a region | Total count of pixels at the boundary of the region. |
| Solidity | $\frac{\text{Area}}{\text{Convex Area}}$ | Density of an object: ratio between its area and the area of the full convex object. |
| Roundness | $\frac{4\pi\,\text{Area}}{(\text{Convex Perimeter})^2}$ | Distinguishes lines and circles from other regions of the image. |
| Convex area | Total number of pixels in the convex image | Counts the total number of pixels in the convex image. |
| Convexity | $\frac{\text{Convex Perimeter}}{\text{Perimeter}}$ | Perimeter ratio between the full convex object and the object itself. |
| Compactness | $\frac{4\pi\,\text{Area}}{(\text{Perimeter})^2}$ | Degree of deviation from a circle: ratio of the object area to the circle area. |
| Maximum radius (MaxR) | $\max\bigl(\text{distance}(C(x,y),\ \text{boundary}(x,y))\bigr)$ | Maximum distance from the boundary of the region to its center $C$; $x$ and $y$ are points on the image. |
| Minimum radius (MinR) | $\min\bigl(\text{distance}(C(x,y),\ \text{boundary}(x,y))\bigr)$ | Minimum distance from the boundary of the region to its center. |
| Euler number (EUL_NO) | Number of objects in the region − number of holes in those objects | Difference between affected and unaffected area of the image. |
| Standard deviation | $\sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2}$ | Used to measure the contrast of an image. |
| Entropy | $\sum\bigl(p\log_2 p\bigr)^2$ | Statistical measure used to characterize the texture of the image. |
| Eccentricity (ECT) | $\sqrt{\left(\frac{\text{MaxR}-\text{MinR}}{\text{MaxR}}\right)^2}$ | Ratio of the distance between the major axis and the ellipse foci; values range 0–1. |
| Rectangularity | $\frac{\text{Area}}{\text{MaxR}-\text{MinR}}$ | Identifies the similarity of the region's shape to a rectangle. |
| Elongation | $1-\frac{\text{MinR}}{\text{MaxR}}$ | Measures the length of the object. |
Class | Sens. | Spec. | PPV | NPV | FPR | Acc. | AUC |
LDA | |||||||
Glioma | 100% | 16.09% | 47.79% | 100% | 0.839 | 52.54% | 0.5804 |
meningioma | 100% | 100% | 0 | 100% | |||
Pituitary | 39.78% | 100% | 100% | 94.31% | 0 | 94.52% | 0.6989 |
SVM Linear | |||||||
Glioma | 100% | 54.42% | 60.67% | 100% | 0.455 | 73.24% | 0.7720 |
meningioma | 100% | 100% | 0 | 100% | |||
Pituitary | 75.48% | 100% | 100% | 89.67% | 0 | 92.16% | 0.8774 |
SVM Quadratic | |||||||
Glioma | 100% | 62.30% | 63.95% | 100% | 0.376 | 77.42% | 0.8115 |
meningioma | 52.28% | 99.79% | 93.02% | 97.54% | 0.0020 | 97.42% | 0.7604 |
Pituitary | 78.57% | 100% | 100% | 90.78% | 0 | 93.11% | 0.8928 |
SVM Cubic | |||||||
Glioma | 100% | 71.25% | 64.91% | 100% | 0.2875 | 81.23% | 0.8562 |
meningioma | 46.80% | 98.56% | 83.89% | 92.04% | 0.014 | 91.41% | 0.7268 |
Pituitary | 77.90% | 100% | 100% | 90.79% | 0 | 93.04% | 0.8895 |
SVM Cosine | |||||||
Glioma | 100% | 75.50% | 67.42% | 100% | 0.2449 | 83.74% | 0.8775 |
meningioma | 47.61% | 99.29% | 90.45% | 93.08% | 0.007 | 92.92% | 0.7345 |
Pituitary | 71.79% | 100% | 100% | 85.71% | 0 | 89.95% | 0.8589 |
Class | Sensitivity | Specificity | PPV | NPV | FPR | Accuracy | AUC |
LDA | |||||||
Glioma | 100% | 82.38% | 78.95% | 100% | 0.1761 | 89.39% | 0.9119 |
meningioma | 69.18% | 99.32% | 95.66% | 93.75% | 0.0067 | 93.99% | 0.8425 |
Pituitary | 89.07 | 100% | 100% | 95.24% | 0 | 96.57% | 0.9453 |
SVM Linear | |||||||
Glioma | 100% | 84.64% | 81.45% | 100% | 0.153 | 90.83% | 0.9232 |
meningioma | 71.50% | 99.60% | 97.46% | 94.26% | 0.004 | 94.68% | 0.8555 |
Pituitary | 88.32% | 100% | 100% | 94.63% | 0 | 96.18% | 0.9416 |
SVM Quadratic | |||||||
Glioma | 100% | 91.18% | 89.53% | 100% | 0.088 | 94.97% | 0.9559 |
meningioma | 84.51% | 99.63% | 98.31% | 96.20% | 0.0036 | 96.57% | 0.9207 |
Pituitary | 93.89% | 100% | 100% | 97.31% | 0 | 98.10% | 0.9699 |
SVM Cubic | |||||||
Glioma | 100% | 92.27% | 91.38% | 100% | 0.072 | 95.88% | 0.9635 |
meningioma | 87.34% | 99.62% | 98.47% | 96.60% | 0.0038 | 96.96% | 0.9348 |
Pituitary | 97.78% | 100% | 100% | 99.07% | 0 | 99.34% | 0.9892 |
SVM Cosine | |||||||
Glioma | 100% | 87.51% | 84.01% | 100% | 0.1248 | 92.46% | 0.9375 |
meningioma | 72.98% | 99.76% | 98.66% | 93.72% | 0.0024 | 94.45% | 0.8636 |
Pituitary | 87.27% | 100% | 100% | 94.14% | 0 | 95.58% | 0.9336 |
Author | Feature/Methods | Performance |
Machhale et al. [7] | SVM-KNN | Sensitivity: 100% Specificity: 93.75% Accuracy: 98% |
Zacharaki et al. [54] | Cross-Validation Using different Classifiers (LDA, k-NN, SVM) | Sensitivity: 75% Specificity: 100% Accuracy: 96.4% |
Badža and Barjaktarović [26] | CNN | Accuracy 95.40% |
Gumaei et al. [27] | Regularized extreme learning machine (RELM) | Accuracy 94.23% |
Swati et al. [4] | Automatic content-based image retrieval (CBIR) system | Average precision 96.13% |
Huang et al. [28] | convolutional neural network based on complex networks (CNNBCN) | Accuracy 95.49% |
Afshar et al. [52] | Capsule Network Method | Accuracy: 86.56% |
Zia et al. [25] | Window Based Image Cropping | Sensitivity:86.26% Specificity:90.90% Accuracy: 85.69% |
Sajjad et al. [24] | CNN with data augmentation | Sensitivity:88.41% Secificity:96.12% Accuracy: 94.58% |
Cheng et al. [22] | Feature extraction methods: Intensity Histogram GLCM BOW Classification Methods: SVM SRC KNN |
Accuracy:91.28% |
Abiwinanda et. al. [23] | CNN | Accuracy: 84.19% |
Anaraki et al. [55] | Genetic Algorithms | Accuracy: 94.2% |
Paul et al. [53] | NN | Accuracy: 91.43% |
Sachdeva et al. [56] | Segmentation and Feature extraction | Highest accuracy 96.67% |
This work | RICA Based Features SVM Cubic with Multiclass classification 1) Pituitary 2) Meningioma 3) Glioma |
1) Accuracy: 99.34%, AUC: 0.9892 2) Accuracy: 96.96%, AUC: 0.9348 3) Accuracy: 95.88%, AUC: 0.9635 |
| Tumor type | Number of patients | Number of MR images | MRI view | Images per view |
| --- | --- | --- | --- | --- |
| Meningioma | 82 | 708 | Axial / Coronal / Sagittal | 209 / 268 / 231 |
| Glioma | 89 | 1426 | Axial / Coronal / Sagittal | 494 / 437 / 495 |
| Pituitary | 62 | 930 | Axial / Coronal / Sagittal | 291 / 319 / 320 |
| Total | 233 | 3064 | | 3064 |
| Feature | Formula | Description |
| --- | --- | --- |
| Contrast (t) | $t=\sum_{x=1}^{K}\sum_{y=1}^{K}(x-y)^2\,p_{xy}$ | Measures the intensity contrast between the current pixel and its neighbour. |
| Correlation (ρ) | $\rho=\sum_{x=1}^{K}\sum_{y=1}^{K}\dfrac{(x-\mu_x)(y-\mu_y)\,p_{xy}}{\sigma_x\sigma_y}$ | Measures the degree of correlation between the current pixel and its neighbour. |
| Dissimilarity (Dis) | $\mathrm{Dis}=\sum_{x=1}^{K}\sum_{y=1}^{K}\lvert x-y\rvert\,p_{xy}$ | Measures local intensity differences in the image. |
| Entropy | $\sum_{x=1}^{K}\sum_{y=1}^{K} p_{xy}(-\ln p_{xy})$ | Quantifies the information encoded in the image. |
| Energy (n) | $n=\sum_{x=1}^{K}\sum_{y=1}^{K} p_{xy}^{2}$ | Measures the uniformity of the image. |
| Homogeneity (h) | $h=\sum_{x=1}^{K}\sum_{y=1}^{K}\dfrac{p_{xy}}{1+\lvert x-y\rvert}$ | Measures the spatial closeness of the elements of G to the diagonal of the matrix. |
| Randomness (r) | $r=-\sum_{x=1}^{K}\sum_{y=1}^{K} p_{xy}\log_2 p_{xy}$ | Measures the randomness of the elements of the GLCM. |
| Mean (µ) | $\mu_x=\sum_{x=1}^{K}\sum_{y=1}^{K} x\,p_{xy}$, $\quad \mu_y=\sum_{x=1}^{K}\sum_{y=1}^{K} y\,p_{xy}$ | Weighted sums over all values, where $p_{xy}$ is the probability mass function. |
| Variance (σ²) | $\sigma_x^2=\sum_{x=1}^{K}\sum_{y=1}^{K} p_{xy}(x-\mu_x)^2$, $\quad \sigma_y^2=\sum_{x=1}^{K}\sum_{y=1}^{K} p_{xy}(y-\mu_y)^2$ | Measures how far a set of numbers is spread out from its mean. |
| Standard deviation (σ) | $\sigma_x=\sqrt{\sigma_x^2}$, $\quad \sigma_y=\sqrt{\sigma_y^2}$ | Quantifies the dispersion of the values of the data set. |
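The GLCM texture features above follow directly from a normalized co-occurrence matrix $p$. A minimal NumPy sketch of the table's formulas is shown below; the function name, the 1-based grey-level indexing, and the use of the natural logarithm for entropy (as in the table) are illustrative choices, not taken from the paper:

```python
import numpy as np

def glcm_features(p):
    """Texture features from a normalized K x K grey-level
    co-occurrence matrix p (entries sum to 1)."""
    K = p.shape[0]
    # Grey-level index grids (1-based, matching the summation limits).
    x, y = np.meshgrid(np.arange(1, K + 1), np.arange(1, K + 1), indexing="ij")
    mu_x, mu_y = np.sum(x * p), np.sum(y * p)
    var_x = np.sum(p * (x - mu_x) ** 2)
    var_y = np.sum(p * (y - mu_y) ** 2)
    nz = p > 0  # skip zero entries so log is defined
    return {
        "contrast": np.sum((x - y) ** 2 * p),
        "correlation": np.sum((x - mu_x) * (y - mu_y) * p)
                       / (np.sqrt(var_x) * np.sqrt(var_y)),
        "dissimilarity": np.sum(np.abs(x - y) * p),
        "entropy": -np.sum(p[nz] * np.log(p[nz])),
        "energy": np.sum(p ** 2),
        "homogeneity": np.sum(p / (1 + np.abs(x - y))),
    }
```

For a purely diagonal matrix (perfectly correlated neighbours) this yields zero contrast and dissimilarity, homogeneity 1, and correlation 1, which is a quick sanity check on the implementation.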
| Feature | Formula | Description |
| --- | --- | --- |
| Area (A) | Total number of pixels in the region | Total count of pixels contained in a specific region of the image. |
| Perimeter (P) | Number of pixels on the region boundary | Total count of pixels on the boundary of the region. |
| Solidity | $\dfrac{\text{Area}}{\text{ConvexArea}}$ | Density of the object: ratio of the region area to the area of its convex hull. |
| Roundness | $\dfrac{4\pi\,\text{Area}}{(\text{ConvexPerimeter})^2}$ | Distinguishes lines and circles from other region shapes. |
| Convex area | Total number of pixels in the convex image | Count of pixels in the filled convex hull of the region. |
| Convexity | $\dfrac{\text{ConvexPerimeter}}{\text{Perimeter}}$ | Ratio of the convex-hull perimeter to the perimeter of the object itself. |
| Compactness | $\dfrac{4\pi\,\text{Area}}{\text{Perimeter}^2}$ | Degree of deviation from a circle: ratio of the object area to the area of a circle with the same perimeter. |
| Maximum radius (MaxR) | $\max\bigl(\mathrm{dist}(C(x,y),\ \mathrm{Boundary}(x,y))\bigr)$ | Maximum distance from the centre $C$ of the region to its boundary, where $x$ and $y$ are point coordinates. |
| Minimum radius (MinR) | $\min\bigl(\mathrm{dist}(C(x,y),\ \mathrm{Boundary}(x,y))\bigr)$ | Minimum distance from the centre of the region to its boundary. |
| Euler number (EUL_NO) | Number of objects in the region − number of holes in those objects | Captures the difference between affected and unaffected area of the image. |
| Standard deviation | $\sqrt{\dfrac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2}$ | Used as a contrast measure of the image. |
| Entropy | $\sum \bigl(p\,\log_2 p\bigr)^2$ | Statistical measure used to characterize the texture of the image. |
| Eccentricity (ECT) | $\sqrt{\left(\dfrac{\text{MaxR}-\text{MinR}}{\text{MaxR}}\right)^{2}}$ | Ratio based on the maximum and minimum radii; the value ranges from 0 to 1. |
| Rectangularity | $\dfrac{\text{Area}}{\text{MaxR}-\text{MinR}}$ | Quantifies the similarity of the region shape to a rectangle. |
| Elongation | $1-\dfrac{\text{MinR}}{\text{MaxR}}$ | Measures how elongated the object is. |
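Several of the shape measures above can be computed from a binary segmentation mask alone. The sketch below is a minimal illustration, not the paper's implementation: the perimeter is approximated as the count of foreground pixels with at least one 4-neighbour in the background, and the centre is taken as the pixel centroid.

```python
import numpy as np

def shape_features(mask):
    """Region shape measures from a 2-D boolean mask,
    following the formulas in the table above."""
    mask = mask.astype(bool)
    area = int(mask.sum())
    # Boundary pixels: foreground pixels not fully surrounded (4-neighbourhood).
    padded = np.pad(mask, 1)
    boundary = mask & ~(padded[:-2, 1:-1] & padded[2:, 1:-1]
                        & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int(boundary.sum())
    # Centre of the region (pixel centroid) and radial distances to boundary.
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    by, bx = np.nonzero(boundary)
    d = np.hypot(by - cy, bx - cx)
    max_r, min_r = d.max(), d.min()
    return {
        "area": area,
        "perimeter": perimeter,
        "compactness": 4 * np.pi * area / perimeter ** 2,
        "eccentricity": (max_r - min_r) / max_r,
        "elongation": 1 - min_r / max_r,
    }
```

On a filled square the compactness falls below the value 1 attained by an ideal circle, as expected, and eccentricity and elongation coincide (both reduce to $(\text{MaxR}-\text{MinR})/\text{MaxR}$).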
| Class | Sens. | Spec. | PPV | NPV | FPR | Acc. | AUC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **LDA** | | | | | | | |
| Glioma | 100% | 16.09% | 47.79% | 100% | 0.839 | 52.54% | 0.5804 |
| Meningioma | 100% | 100% | 0 | 100% | | | |
| Pituitary | 39.78% | 100% | 100% | 94.31% | 0 | 94.52% | 0.6989 |
| **SVM Linear** | | | | | | | |
| Glioma | 100% | 54.42% | 60.67% | 100% | 0.455 | 73.24% | 0.7720 |
| Meningioma | 100% | 100% | 0 | 100% | | | |
| Pituitary | 75.48% | 100% | 100% | 89.67% | 0 | 92.16% | 0.8774 |
| **SVM Quadratic** | | | | | | | |
| Glioma | 100% | 62.30% | 63.95% | 100% | 0.376 | 77.42% | 0.8115 |
| Meningioma | 52.28% | 99.79% | 93.02% | 97.54% | 0.0020 | 97.42% | 0.7604 |
| Pituitary | 78.57% | 100% | 100% | 90.78% | 0 | 93.11% | 0.8928 |
| **SVM Cubic** | | | | | | | |
| Glioma | 100% | 71.25% | 64.91% | 100% | 0.2875 | 81.23% | 0.8562 |
| Meningioma | 46.80% | 98.56% | 83.89% | 92.04% | 0.014 | 91.41% | 0.7268 |
| Pituitary | 77.90% | 100% | 100% | 90.79% | 0 | 93.04% | 0.8895 |
| **SVM Cosine** | | | | | | | |
| Glioma | 100% | 75.50% | 67.42% | 100% | 0.2449 | 83.74% | 0.8775 |
| Meningioma | 47.61% | 99.29% | 90.45% | 93.08% | 0.007 | 92.92% | 0.7345 |
| Pituitary | 71.79% | 100% | 100% | 85.71% | 0 | 89.95% | 0.8589 |
| Class | Sensitivity | Specificity | PPV | NPV | FPR | Accuracy | AUC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **LDA** | | | | | | | |
| Glioma | 100% | 82.38% | 78.95% | 100% | 0.1761 | 89.39% | 0.9119 |
| Meningioma | 69.18% | 99.32% | 95.66% | 93.75% | 0.0067 | 93.99% | 0.8425 |
| Pituitary | 89.07% | 100% | 100% | 95.24% | 0 | 96.57% | 0.9453 |
| **SVM Linear** | | | | | | | |
| Glioma | 100% | 84.64% | 81.45% | 100% | 0.153 | 90.83% | 0.9232 |
| Meningioma | 71.50% | 99.60% | 97.46% | 94.26% | 0.004 | 94.68% | 0.8555 |
| Pituitary | 88.32% | 100% | 100% | 94.63% | 0 | 96.18% | 0.9416 |
| **SVM Quadratic** | | | | | | | |
| Glioma | 100% | 91.18% | 89.53% | 100% | 0.088 | 94.97% | 0.9559 |
| Meningioma | 84.51% | 99.63% | 98.31% | 96.20% | 0.0036 | 96.57% | 0.9207 |
| Pituitary | 93.89% | 100% | 100% | 97.31% | 0 | 98.10% | 0.9699 |
| **SVM Cubic** | | | | | | | |
| Glioma | 100% | 92.27% | 91.38% | 100% | 0.072 | 95.88% | 0.9635 |
| Meningioma | 87.34% | 99.62% | 98.47% | 96.60% | 0.0038 | 96.96% | 0.9348 |
| Pituitary | 97.78% | 100% | 100% | 99.07% | 0 | 99.34% | 0.9892 |
| **SVM Cosine** | | | | | | | |
| Glioma | 100% | 87.51% | 84.01% | 100% | 0.1248 | 92.46% | 0.9375 |
| Meningioma | 72.98% | 99.76% | 98.66% | 93.72% | 0.0024 | 94.45% | 0.8636 |
| Pituitary | 87.27% | 100% | 100% | 94.14% | 0 | 95.58% | 0.9336 |
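The per-class columns in the tables above follow the standard one-vs-rest confusion-matrix definitions; a small sketch of those definitions (the function name and argument order are illustrative):

```python
def one_vs_rest_metrics(tp, fp, fn, tn):
    """Per-class diagnostic metrics from one-vs-rest confusion-matrix
    counts: tp/fp/fn/tn for the class treated as 'positive'."""
    return {
        "sensitivity": tp / (tp + fn),            # true-positive rate (recall)
        "specificity": tn / (tn + fp),            # true-negative rate
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "fpr": fp / (fp + tn),                    # false-positive rate = 1 - specificity
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

Note that FPR is reported in the tables as a proportion rather than a percentage, which is why, e.g., specificity 83.91% pairs with FPR 0.839 in the complement sense (FPR = 1 − specificity).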
| Author | Features/Methods | Performance |
| --- | --- | --- |
| Machhale et al. [7] | SVM-KNN | Sensitivity: 100%; Specificity: 93.75%; Accuracy: 98% |
| Zacharaki et al. [54] | Cross-validation with different classifiers (LDA, k-NN, SVM) | Sensitivity: 75%; Specificity: 100%; Accuracy: 96.4% |
| Badža and Barjaktarović [26] | CNN | Accuracy: 95.40% |
| Gumaei et al. [27] | Regularized extreme learning machine (RELM) | Accuracy: 94.23% |
| Swati et al. [4] | Automatic content-based image retrieval (CBIR) system | Average precision: 96.13% |
| Huang et al. [28] | Convolutional neural network based on complex networks (CNNBCN) | Accuracy: 95.49% |
| Afshar et al. [52] | Capsule network method | Accuracy: 86.56% |
| Zia et al. [25] | Window-based image cropping | Sensitivity: 86.26%; Specificity: 90.90%; Accuracy: 85.69% |
| Sajjad et al. [24] | CNN with data augmentation | Sensitivity: 88.41%; Specificity: 96.12%; Accuracy: 94.58% |
| Cheng et al. [22] | Feature extraction: intensity histogram, GLCM, BOW; classification: SVM, SRC, KNN | Accuracy: 91.28% |
| Abiwinanda et al. [23] | CNN | Accuracy: 84.19% |
| Anaraki et al. [55] | Genetic algorithms | Accuracy: 94.2% |
| Paul et al. [53] | NN | Accuracy: 91.43% |
| Sachdeva et al. [56] | Segmentation and feature extraction | Highest accuracy: 96.67% |
| This work | RICA-based features, SVM Cubic with multiclass classification: 1) Pituitary, 2) Meningioma, 3) Glioma | 1) Accuracy: 99.34%, AUC: 0.9892; 2) Accuracy: 96.96%, AUC: 0.9348; 3) Accuracy: 95.88%, AUC: 0.9635 |