
Systematic risk estimation is widely applied by investors and managers to predict market risk. One of the most widely used measures of risk is the Capital Asset Pricing Model (CAPM), which has been studied empirically with a focus on the impact of the return interval on the betas. This paper addresses that topic and estimates the CAPM at different time scales for GCC markets, adapting a wavelet method to examine the relationship between a stock's return and its systematic risk at each scale. The main novelty is the use of non-uniform time intervals; unlike the existing literature, we use random ones. The proposed procedure is applied empirically to a sample from the Saudi Tadawul market, the most important and most actively traded GCC market, over the period January 01, 2013 to September 20, 2018, a period marked by many political, economic, and financial events such as the Qatar embargo, the Yemen war, the NEOM project, the 2030 KSA vision, and the aftermath of the Arab Spring. The findings may serve as a basis for understanding the current and future situation of GCC markets and thus for investors' decisions in such markets.
Citation: Anouar Ben Mabrouk. Wavelet-based systematic risk estimation: application on GCC stock markets: the Saudi Arabia case[J]. Quantitative Finance and Economics, 2020, 4(4): 542-595. doi: 10.3934/QFE.2020026
Machine-learning algorithms are well-known approaches with a wide range of applications in biomedical signal processing, such as classification [1], regression [2], and optimization [3]. In medical applications, diagnosis, recognition, and prediction are the most important tasks that can be handled using classification techniques [4]. In the literature, a wide range of applications can be found based on classification techniques, such as emotion recognition [5], text classification [6], activity recognition [7], and epileptic seizure classification [8]. In most classification techniques, the classification performance strongly depends on the input features of the classification algorithms. Hence, feature extraction, the process of summarizing the data into some indexes, and feature selection, the process of finding the optimum combination of the extracted features, play important roles in classification approaches [9].
Among the diverse features that can be extracted from any data, fractal-based features are known as powerful nonlinear tools for measuring data complexity [10]. For example, it was shown that an individual's eye movements are closely related to fractal patterns [11]. Also, it was revealed that the fractality of electroencephalography (EEG) signals is significantly reduced in schizophrenia [12]. Furthermore, it was found that memory content could increase the fractality of EEG signals [13]. Another class of powerful features is derived from transformed data. For instance, features based on time-frequency transforms strongly help to predict atrial fibrillation [14], diagnose heart failure [15], and detect mitral valve prolapse [16]. Such transforms are sometimes not directly used for feature extraction; however, they can be beneficial in the data preparation processes prior to feature extraction [17].
The study of the brain's cognitive functions, such as working memory and overt and covert visual attention, is an interesting area of research. Thus, many studies have investigated how these top-down functions influence neural spiking activity in different areas of the brain [18,19,20]. Nonetheless, several studies reported that no memory-related modulation could be found in the neural spiking activity of the middle temporal (MT) cortex [21,22,23]. On the other hand, a recent study showed that working memory increases the fractality of firing rate signals [24]. Many studies have shown the neural correlate of working memory as an increase in the firing rate of neurons in the prefrontal [19,25,26], parietal [27], and visual [25] cortices; however, no such memory-related change in the average spiking activity of neurons has been reported in the extrastriate cortex, including V4 [19] and the middle temporal (MT) cortex [19,25], even after applying machine-learning techniques [25]. In this paper, we focus on the maintenance of spatial information, which has been shown to be sent directly from the frontal eye field to the extrastriate cortex through feedback connections in the form of persistent spiking activity [19]. Although spatial working memory does not increase the average firing rate of extrastriate neurons, it significantly enhances the sensitivity of individual neurons to incoming visual stimuli [26], alters the correlated activity of the population of neurons [18], increases the power of the local field potential (LFP) in the alpha-beta frequency band, enhances the spike-phase coherence of MT responses in the same frequency range [28], and increases the fractality of MT spiking activity [24].
As mentioned above, a recent study revealed that when the spiking activity of neurons is mapped to the fractal dimension feature space, the content of working memory can be captured [24]. In this study, we examined whether a set of linear and/or nonlinear features could reveal the deployment of spatial working memory from the spiking activity of neurons in the area MT. In this regard, we used two different learners, three feature selection algorithms, and two cross-validation methods to show the robustness of our results. The remaining parts of the paper are arranged as follows: Section 2 describes the studied data, extracted features, selection methods, classification algorithms, and classification assessment criteria. Section 3 presents the results, and Section 4 discusses the results and concludes the paper.
All experimental procedures were performed under the National Institutes of Health Guide for the Care and Use of Laboratory Animals, the Society for Neuroscience Guidelines and Policies. The protocols for all experimental, surgical, and behavioral procedures were approved by the Montana State University Institutional Animal Care and Use Committee.
In this study, the spiking activity of 131 neurons (stored at 32 kHz), recorded in 11 sessions using electrode arrays, was used. These signals were recorded from area MT (Figure 1a) of the brains of two male macaque monkeys (five and seven years old). The monkeys, already trained to carry out the memory-guided saccade (MGS) task, were positioned on a customized chair in front of a monitor (24 inches, 144 Hz refresh rate) at a distance of 28.5 cm from their eyes. Initially, during a surgery in which the monkeys were anesthetized, recording chambers were mounted on the monkeys' skulls over area MT. Single-electrode recordings were used to confirm that the chambers were mounted over the desired (MT) area. During the task, the monkeys' heads were restrained, and they received juice as a reward through a syringe pump. The reward delivery, as well as the visual stimulus presentation, was controlled using the MonkeyLogic toolbox in MATLAB. Moreover, a photodiode was used to record the actual time of visual stimulus onset. The recorded data were then digitized at a sampling frequency of 32 kHz and stored.
The MGS task commenced with the appearance of a fixation point (FP) in the center of the monitor. In this period, called the fixation period, the monkeys were required to fixate on the FP for 1000 ms. After that, while the monkeys were fixating on the FP, a visual stimulus appeared in one of two positions: the IN condition (same visual hemifield as the neuron's receptive field; displayed with red dots) or the OUT condition (opposite hemifield relative to the RF of the recorded neuron; displayed with a green dot), as shown in Figure 1b. The stimulus remained for 1000 ms and then disappeared (visual period). As soon as the visual cue disappeared, the memory period started and lasted for 1000 ms. During the memory period, the monkeys had to keep gazing at the FP and memorize the location of the disappeared cue. Finally, after the disappearance of the FP, the monkeys had to make a saccade to the remembered location to receive a reward. The MGS phases are portrayed in Figure 1c, along with the recorded spiking activity of a sample MT neuron during the task (see [18,19] for details).
Data preparation plays a vital role in obtaining the best results from machine-learning approaches. Here, in the first step, the average spiking activity of individual neurons across trials was obtained. In the next step, the average firing rate signals of each neuron in the IN and OUT conditions were considered for extracting the features. In order to eliminate the spiking activity related to the disappearance of the visual stimulus in the memory period, the first 400 ms of this period were trimmed before the feature extraction step. In the final step, the smoothed signals were considered for the following processes.
Extracting distinguishable features is an essential step in detecting the presence of working memory. This section introduces the six most commonly used fractal features and some statistical measures. Also, four transforms frequently used in signal processing are described.
Fractal dimension (FD) is an index of complexity that geometrically refers to an object's non-integer dimension [29]. The FD can be obtained using different algorithms; in general, however, it is derived from the number of blocks forming a pattern or covering the data graph. Here, six well-known algorithms for calculating the FD are described.
Higuchi fractal dimension (HFD): HFD is an accurate box-counting method and the main algorithm for calculating the FD of a graph [30]. Any time series $x_t: x(1), x(2), \dots, x(N)$ with a finite number of samples $N$ can be expressed as the $k$ subsets $x_m^k$, where
$$x_m^k = \left\{x(m),\; x(m+k),\; x(m+2k),\; \dots,\; x(m+Rk)\right\}. \tag{1}$$
Here, $m=1,2,\dots,k$ is the initial time, $k=1,2,\dots,k_m$ is the delay, and $R=\left[\frac{N-m}{k}\right]$, where $[\cdot]$ denotes the integer part. Accordingly, the curve length of each subset can be defined as
$$L_m(k) = \frac{N-1}{R\,k^2} \sum_{i=1}^{R} \left|x(m+ik) - x(m+(i-1)k)\right|. \tag{2}$$
Letting $\langle L(k)\rangle$ denote the average of the curve lengths $L_m(k)$ over $m$, it can be written that
$$\langle L(k)\rangle \propto k^{-D}, \tag{3}$$
where $D$ is the FD obtained using the Higuchi algorithm. In this paper, $k_m=30$ was determined by trial and error.
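A minimal NumPy sketch of the Higuchi procedure described above (the function name is illustrative, indices are shifted to 0-based, and $k_m$ defaults to the paper's value of 30):

```python
import numpy as np

def higuchi_fd(x, k_max=30):
    """Estimate the Higuchi fractal dimension of a 1-D signal.

    For each delay k and initial time m, the curve length L_m(k) of the
    subsampled series x(m), x(m+k), ... is computed (Eq 2); the FD is the
    slope of log<L(k)> versus log(1/k) (Eq 3).
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    log_inv_k, log_L = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(1, k + 1):
            R = (N - m) // k
            if R < 1:
                continue
            idx = (m - 1) + np.arange(R + 1) * k      # 0-based samples m, m+k, ...
            L_mk = np.abs(np.diff(x[idx])).sum() * (N - 1) / (R * k * k)
            lengths.append(L_mk)
        log_inv_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    D, _ = np.polyfit(log_inv_k, log_L, 1)            # regression slope gives D
    return D
```

A straight line yields $D \approx 1$, while white noise yields a value close to 2, matching the interpretation of the FD as a complexity index.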
Katz fractal dimension (KFD): KFD is a distance-based technique for obtaining the FD that uses the Euclidean distances between consecutive points of a time series [31]. If $L$ denotes the sum of the Euclidean distances between every two successive samples ($L=\sum_{i=1}^{N-1}\mathrm{dist}(n_i, n_{i+1})$) and $d$ denotes the maximum Euclidean distance between the first and the $j$th sample ($d=\max_j \mathrm{dist}(n_1, n_j)$, $j=2,\dots,N$), the FD can be obtained as
$$D = \frac{\log(N)}{\log\left(N\,\frac{d}{L}\right)}. \tag{4}$$
In the above equation, $N$ is the number of samples in the time series, and $D$ is the FD value obtained with the Katz technique.
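A sketch of the Katz estimator of Eq (4), treating the waveform as planar points $(i, x_i)$ (using the number of steps $n = N-1$ in place of $N$ is a common convention assumed here; the function name is illustrative):

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension following Eq (4).

    L is the total curve length (sum of distances between successive
    points); d is the maximum distance from the first point.
    """
    x = np.asarray(x, dtype=float)
    pts = np.column_stack([np.arange(len(x)), x])          # points (i, x_i)
    L = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum() # total length
    d = np.linalg.norm(pts - pts[0], axis=1).max()         # max extent
    n = len(x) - 1                                         # number of steps
    return np.log(n) / (np.log(n) + np.log(d / L))
```

For a straight line, $d = L$, so the logarithm of their ratio vanishes and $D = 1$ exactly; irregular signals give $D > 1$.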
Generalized Hurst exponent (GHE): GHE is a fractal index indicating the long-range dependence of a time series, based on the $q$th-order moment of the distribution of increments [32]. Assuming $x(t)$ is the studied time series with $N$ samples, the $q$th-order moment can be defined as
$$K_q(\tau) = \frac{\left\langle \left|x(t) - x(t-\tau)\right|^q \right\rangle_t}{\left\langle \left|x(t)\right|^q \right\rangle_t}, \tag{5}$$
where $\tau$ is the delay and $\langle\cdot\rangle_t$ denotes averaging over the total duration. Then, the scaling exponent of the time series can be obtained through
$$K_q(\tau) \propto \left(\frac{\tau}{T_s}\right)^{qH(q)}, \tag{6}$$
where $T_s$ is the sampling time and $H$ refers to the GHE of the time series. In this study, the first-order moment ($q=1$) was considered.
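A sketch of Eqs (5)–(6) with $q=1$ as in the text (the maximum delay and function name are assumed choices; $H$ is the regression slope of $\log K_q(\tau)$ against $q\log\tau$):

```python
import numpy as np

def generalized_hurst(x, q=1, tau_max=19):
    """Generalized Hurst exponent via the q-th order moment of increments.

    K_q(tau) is <|x(t) - x(t - tau)|^q> normalized by <|x|^q> (Eq 5);
    the exponent H is the slope in the scaling relation of Eq (6).
    """
    x = np.asarray(x, dtype=float)
    taus = np.arange(1, tau_max + 1)
    K = np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])
    K = K / np.mean(np.abs(x) ** q)          # normalization from Eq (5)
    H, _ = np.polyfit(q * np.log(taus), np.log(K), 1)
    return H
```

As a sanity check, an ordinary random walk (Brownian motion) should give $H \approx 0.5$.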
Maragos and Sun fractal dimension (MSFD): MSFD is a morphological method for calculating the FD that covers the data graph using the morphological operators, i.e., erosion and dilation [33]. The support-limited dilations and erosions with support set $s$ ($s=1,2,\dots,N$) and structuring element $b$ can be formulated as
$$
\begin{aligned}
\text{dilation:}\quad & \left(x\oplus_s b^k\right)_n =
\begin{cases}
\max\left\{x_{n-1},\, x_n,\, x_{n+1}\right\}, & k=1,\\[2pt]
\max\left\{\left(x\oplus_s b^{k-1}\right)_{n-1},\, \left(x\oplus_s b^{k-1}\right)_n,\, \left(x\oplus_s b^{k-1}\right)_{n+1}\right\}, & k\ge 2,
\end{cases}\\[6pt]
\text{erosion:}\quad & \left(x\ominus_s b^k\right)_n =
\begin{cases}
\min\left\{x_{n-1},\, x_n,\, x_{n+1}\right\}, & k=1,\\[2pt]
\min\left\{\left(x\ominus_s b^{k-1}\right)_{n-1},\, \left(x\ominus_s b^{k-1}\right)_n,\, \left(x\ominus_s b^{k-1}\right)_{n+1}\right\}, & k\ge 2,
\end{cases}
\end{aligned} \tag{7}
$$
where $k=1,2,\dots,k_m$. Consequently, the morphological cover that surrounds the data graph can be obtained as
$$C_k = \sum_{n=1}^{N} \left[\left(x\oplus_s b^k\right)_n - \left(x\ominus_s b^k\right)_n\right]. \tag{8}$$
Here, $k_m$ is set according to the rule mentioned in [34]. Finally, the morphological FD of the time series is obtained as the angular coefficient of the linear regression of $\ln\!\left(\frac{C_k}{(2k/N)^2}\right)$ versus $\ln\!\left(\frac{1}{2k/N}\right)$.
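A sketch of the morphological cover of Eqs (7)–(8) with the three-sample flat structuring element (the regression normalization by $2k/N$ follows the text; since any affine rescaling of the log-log axes leaves the slope unchanged, this choice does not affect the estimate; names are illustrative):

```python
import numpy as np

def maragos_sun_fd(x, k_max=30):
    """Morphological (covering) fractal dimension estimate.

    The signal is iteratively dilated (max) and eroded (min) with a flat
    3-sample structuring element; the area between the two envelopes is
    the cover C_k of Eq (8), regressed against scale on log-log axes.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    dil, ero = x.copy(), x.copy()
    log_scale, log_cover = [], []
    for k in range(1, k_max + 1):
        p = np.pad(dil, 1, mode='edge')
        dil = np.maximum(np.maximum(p[:-2], p[1:-1]), p[2:])   # k-th dilation
        p = np.pad(ero, 1, mode='edge')
        ero = np.minimum(np.minimum(p[:-2], p[1:-1]), p[2:])   # k-th erosion
        C_k = np.sum(dil - ero)                                # cover area, Eq (8)
        eps = 2.0 * k / N                                      # normalized scale
        log_scale.append(np.log(1.0 / eps))
        log_cover.append(np.log(C_k / eps ** 2))
    D, _ = np.polyfit(log_scale, log_cover, 1)
    return D
```

A smooth ramp signal gives $D \approx 1$, while white noise drives the estimate toward 2.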
Liebovitch and Toth fractal dimension (LTD): LTD is a fast implementation of the box-counting algorithm [35], which is based on the number of blocks into which the data graph can be split. So, the FD can be found as
$$D = \lim_{d\to 0} \frac{\log\left(n_{\mathrm{blocks}}(d)\right)}{\log\left(1/d\right)}, \tag{9}$$
where $d$ is the size of the blocks and $D$ is the FD based on the original box-counting method. The LTD algorithm speeds up box counting by discarding block sizes that are too small or too large; therefore, it is faster than the original algorithm [36].
Fractal Volatility (FV): FV computes the FD of a time series using the box-counting method combined with a random-walk process [37]. In other words, it splits the data into blocks of size $d$ generated by performing the random-walk process.
Discrete Wavelet Transform (DWT): The DWT describes any data by a set of weighted orthonormal wavelets derived from a mother wavelet [38], according to the following relation
$$X_{m,n} = \sum_{t=1}^{N} x_t\, \frac{1}{\sqrt{2^m}}\, \psi\!\left(2^{-m}t - n\right), \tag{10}$$
where $N$ is the total number of samples, $X_{m,n}$ is the transformed data, and $\psi$ is the orthonormal wavelet. The parameters $m$ and $n$ control the dilation and translation of the discrete wavelet, respectively. The DWT captures temporal information as well as frequency information at different scales. In this paper, the Daubechies-4 (db4) wavelet was selected as the orthonormal wavelet for further analysis.
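As an illustration of Eq (10), one DWT level can be sketched with the Haar wavelet (the paper itself uses db4, available e.g. through the PyWavelets package; Haar is used here only to keep the example dependency-free):

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the discrete wavelet transform with the Haar wavelet.

    Low-pass (approximation) and high-pass (detail) coefficients are
    produced at half the input rate; because the Haar basis is
    orthonormal, signal energy is preserved across the two bands.
    """
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                       # drop a trailing sample for pairing
        x = x[:-1]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail
```

The detail band is zero wherever the signal is locally constant, which is what makes wavelet coefficients useful as localized features.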
Discrete Fourier Transform (DFT): The DFT is the most fundamental transformation; it describes the data in terms of weighted sinusoidal functions. The DFT can be defined as
$$X_k = \sum_{n=1}^{N} x_n\, e^{-j\frac{2\pi}{N}kn}, \tag{11}$$
where $k=1,2,\dots,N$ and $X_k$ is the transformed data at a specific frequency.
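A quick check of Eq (11) using NumPy's FFT (the sampling rate and test frequency are illustrative):

```python
import numpy as np

# Recover the dominant frequency of a 50 Hz sinusoid via the DFT.
fs = 1000                                  # sampling frequency (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)
X = np.fft.rfft(x)                         # DFT of the real-valued signal
freqs = np.fft.rfftfreq(len(x), d=1 / fs)  # frequency axis in Hz
dominant = freqs[np.argmax(np.abs(X))]     # -> 50.0
```

Summary statistics of $|X_k|$ (mean, variance, etc.) are exactly the kind of transform-based features used later in the feature vector.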
Discrete Short-Time Fourier Transform (DSTFT): The DSTFT is the time-frequency version of the DFT, which by itself contains only frequency information. Thus, the DSTFT can be helpful when the DFT becomes insufficient for analysis, such as for nonstationary signals [39]. The DSTFT is defined as
$$X_{k,m} = \sum_{n=1}^{N} x_n\, \omega(n-m)\, e^{-j\frac{2\pi}{N}kn}, \tag{12}$$
where $\omega(\cdot)$ is a temporal window of size $L$ centered at sample $m$.
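The windowed DFT of Eq (12) corresponds to what SciPy exposes as `scipy.signal.stft`; a sketch on a signal whose frequency changes halfway through (all parameters illustrative):

```python
import numpy as np
from scipy.signal import stft

# A signal that switches from 50 Hz to 120 Hz: the STFT localizes the
# change in time, which a plain DFT cannot do.
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.concatenate([np.sin(2 * np.pi * 50 * t),
                    np.sin(2 * np.pi * 120 * t)])
f, seg_t, Z = stft(x, fs=fs, nperseg=256)      # Hann window, length 256
early = f[np.argmax(np.abs(Z[:, 2]))]          # a window in the first half
late = f[np.argmax(np.abs(Z[:, -3]))]          # a window in the second half
```

The dominant bin in early windows sits near 50 Hz and in late windows near 120 Hz, up to the frequency resolution $f_s/256 \approx 3.9$ Hz.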
Discrete Stockwell Transform (DST): The DST is an extension of the DWT with a close relationship to the DSTFT. It provides a frequency-dependent resolution, since the sinusoidal functions are fixed in time while a scalable Gaussian window performs the dilation and translation [40]. The DST can be described as
$$S\!\left(iT_s, \frac{n}{NT_s}\right) = \sum_{m=0}^{N-1} H\!\left(\frac{m+n}{NT_s}\right) e^{-\frac{2\pi^2 m^2}{n^2}}\, e^{j\frac{2\pi m i}{N}}, \tag{13}$$
where $T_s$ is the sampling period (the inverse of the sampling frequency), $iT_s$ defines the window position $\tau$, $\frac{n}{NT_s}$ is the frequency, and $H(\cdot)$ is the DFT of the input data.
Moments of a distribution are basic statistical measures, the primary features that can be simply obtained from any data. The first-, second-, third-, and fourth-order moments are called mean, variance, skewness, and kurtosis, respectively, and are described as follows
$$M_1 = \frac{1}{N}\sum_{n=1}^{N} x_n, \tag{14}$$

$$M_2 = \frac{1}{N}\sum_{n=1}^{N} \left(x_n - M_1\right)^2, \tag{15}$$

$$M_3 = \frac{\frac{1}{N}\sum_{n=1}^{N} \left(x_n - M_1\right)^3}{M_2^{3/2}}, \tag{16}$$

$$M_4 = \frac{\frac{1}{N}\sum_{n=1}^{N} \left(x_n - M_1\right)^4}{M_2^{2}}. \tag{17}$$
It should be noted that $N$ is the number of data samples.
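The four moments of Eqs (14)–(17) in NumPy (population normalization by $N$, non-excess kurtosis; the function name is illustrative):

```python
import numpy as np

def first_four_moments(x):
    """Mean, variance, skewness, and kurtosis exactly as in Eqs (14)-(17)."""
    x = np.asarray(x, dtype=float)
    m1 = x.mean()                                 # Eq (14)
    m2 = np.mean((x - m1) ** 2)                   # Eq (15), population variance
    m3 = np.mean((x - m1) ** 3) / m2 ** 1.5       # Eq (16), skewness
    m4 = np.mean((x - m1) ** 4) / m2 ** 2         # Eq (17), kurtosis
    return m1, m2, m3, m4
```

For the symmetric sample $\{1,2,3,4,5\}$ this gives mean 3, variance 2, skewness 0, and kurtosis 1.7.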
The median is another statistical measure, useful whenever the mean is not a good description of the distribution. The median can be defined as follows
$$\mathrm{med} = \begin{cases} x\!\left(\frac{N+1}{2}\right), & N \text{ odd},\\[4pt] \frac{1}{2}\left(x\!\left(\frac{N}{2}\right) + x\!\left(\frac{N}{2}+1\right)\right), & N \text{ even}, \end{cases} \tag{18}$$
where the samples $x(\cdot)$ are sorted in ascending order. The maximum and minimum values of the samples within a specific period are the other two well-known statistical measures used in this paper.
In signal processing, feature selection is an optimization step that finds the optimum subset of features for classification. Feature selection is therefore used as a preprocessing step in machine-learning problems and is of particular importance when the data or the extracted features are high-dimensional [41]. In this subsection, three popular algorithms, namely the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Ant Colony Optimization (ACO), are briefly described.
GA is an evolutionary algorithm, inspired by natural selection and the evolution of species, that can be applied to search and optimization problems. In this algorithm, a population of chromosomes, each encoding a candidate solution to the problem, is selected for the next generation. From the selected chromosomes, a new generation, including a number of new chromosomes, is created using crossover and mutation operators. This process is repeated heuristically until the solution converges to an optimal value [42]. In this article, five initial chromosomes and 100 iterations with a crossover rate of 0.6 and a mutation rate of 0.001 were selected as the parameters of the GA used for selecting the optimum features.
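A toy sketch of GA-based feature selection with the parameters quoted above (the fitness function below, a correlation score with a per-feature penalty, is an assumed stand-in for the classifier accuracy the paper would actually maximize; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Toy fitness: summed |correlation| of selected features with the
    label, minus a small per-feature penalty (stand-in for accuracy)."""
    if not mask.any():
        return -np.inf
    cols = np.flatnonzero(mask)
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in cols]
    return float(np.sum(corr) - 0.05 * len(cols))

def ga_select(X, y, pop_size=5, iters=100, p_cross=0.6, p_mut=0.001):
    """GA over binary feature masks: elitist bookkeeping, single-point
    crossover between the two fittest chromosomes, bit-flip mutation."""
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5        # random bit-mask chromosomes
    best, best_fit = pop[0].copy(), -np.inf
    for _ in range(iters):
        fits = np.array([fitness(m, X, y) for m in pop])
        if fits.max() > best_fit:                # keep best mask ever seen
            best_fit, best = fits.max(), pop[fits.argmax()].copy()
        a, b = pop[np.argsort(fits)[-2:]]        # two fittest as parents
        children = []
        for _ in range(pop_size):
            if rng.random() < p_cross:           # single-point crossover
                cut = int(rng.integers(1, n))
                child = np.concatenate([a[:cut], b[cut:]])
            else:
                child = a.copy()
            child = child ^ (rng.random(n) < p_mut)  # bit-flip mutation
            children.append(child)
        pop = np.array(children)
    return best, best_fit
```

The same loop structure applies to PSO and ACO selection; only the way candidate masks are generated and updated differs.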
PSO is an evolutionary computational algorithm that tries to find an optimized solution, inspired by natural social behaviors such as the group behavior observed in schools of fish and flocks of birds. According to this idea, some initial particles are placed in the search space, each with its own position and velocity. These particles move within the search space to find the best solution. In general, particles make their next moves based on their own best-experienced position as well as the best-experienced position of the whole population of particles, called a swarm. The movement of the particles in the search space therefore does not depend on the gradient, so the method can be applied to both differentiable and non-differentiable optimization problems [43]. In this paper, we set the cognitive and social factors $C_1=C_2=2$ and the inertia weight $w=1$, with five initial particles and 100 maximum iterations, as the parameters of the PSO algorithm used for obtaining the optimum features.
ACO is a probabilistic, graph-based evolutionary algorithm that was first proposed based on the natural behavior of ants hunting for food. In real life, ants leave pheromones on the path to guide others to resources as they explore their surroundings, and thus they can find the shortest path to the food. Inspired by this cooperative technique, optimization problems can be solved. First, some initial artificial ants are positioned in the parameter space and move stochastically toward a solution. The pheromone trails, which specify the edges of the graph in the ACO algorithm, are obtained for each ant, and the best solution is selected. In the following steps, the edges of the graph are updated and guide the artificial ants toward the solution. This process is repeated iteratively until the solution converges to an optimal value [44]. In this paper, five initial artificial ants, 100 allowed iterations, $\alpha=\tau=\eta=1$, $\rho=0.2$, and $\beta=0.1$ were selected as the parameters needed for applying the ACO algorithm to find the optimum features.
Feature classification is the process of assigning or predicting the label of new data based on a model trained on the observed data [45]. The Support Vector Machine (SVM) and the K-Nearest Neighbor (KNN) classifiers are two of the most widely used supervised machine-learning algorithms; both are briefly described below.
The original SVM algorithm classifies two classes with a linear boundary; however, it has been extended to multi-class data. The SVM classifier builds an optimum hyperplane with the largest margin, i.e., the maximum distance from the nearest data points to the decision boundary, that distinguishes the classes with the highest accuracy. If the data are not linearly separable, nonlinear kernels implicitly map them into a higher-dimensional space in which a linear boundary can separate them [44]. This paper uses the SVM classifier with a third-order polynomial kernel to classify the neuronal spiking activity in the fixation and memory periods.
The KNN algorithm classifies each new sample by a vote among the classes of the k closest observed samples. To classify new data, the KNN classifier therefore finds the k nearest training samples by computing the distance of the new sample from all other samples in the parameter space. Although the KNN algorithm is simple to implement, it can be time-consuming due to this computational cost. This paper employs the KNN classifier with $k=3$ and the standardized Euclidean distance to detect the presence of working memory from the firing-rate data.
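A minimal scikit-learn sketch of the two classifiers as configured in the text (the synthetic 41-dimensional data and the 262-sample size, matching 131 neurons in two periods, are illustrative stand-ins for the extracted feature vectors; plain rather than standardized Euclidean distance is used in the KNN for brevity):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the 41-feature vectors described in the text.
X, y = make_classification(n_samples=262, n_features=41, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel='poly', degree=3).fit(X_tr, y_tr)          # 3rd-order polynomial kernel
knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)   # k = 3

acc_svm = svm.score(X_te, y_te)
acc_knn = knn.score(X_te, y_te)
```

On this synthetic data both classifiers comfortably beat chance, mirroring the near-identical performance reported later for the real features.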
To assess the classification performance, different criteria have been introduced in the literature [16]. Accuracy, the most well-known evaluation criterion for classification, is defined as the number of samples correctly labeled by the classifier versus the total number of samples (see Eq (19)). Sensitivity and specificity are more specific assessment criteria, since they respectively show the performance of the classifier in detecting and not detecting the target class. Sensitivity is defined as the number of samples correctly labeled as the target class versus the actual number of samples in the target class (see Eq (20)). Specificity, on the other hand, is defined as the number of samples correctly labeled as the non-target class versus the actual number of samples in the non-target class (see Eq (21)).
$$\mathrm{Accuracy} = \frac{tp+tn}{tp+tn+fp+fn}, \tag{19}$$

$$\mathrm{Sensitivity} = \frac{tp}{tp+fn}, \tag{20}$$

$$\mathrm{Specificity} = \frac{tn}{tn+fp}. \tag{21}$$
Here, $tp$, $tn$, $fp$, and $fn$ refer to the true-positive, true-negative, false-positive, and false-negative counts, respectively.
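Eqs (19)–(21) can be computed directly from the four confusion-matrix counts (the function name is illustrative; class 1 is taken as the target class):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity per Eqs (19)-(21)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / (tp + tn + fp + fn)    # Eq (19)
    sensitivity = tp / (tp + fn)                  # Eq (20)
    specificity = tn / (tn + fp)                  # Eq (21)
    return accuracy, sensitivity, specificity
```

For example, with 3 true positives, 2 true negatives, 2 false positives, and 1 false negative, the metrics are 0.625, 0.75, and 0.5.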
Figure 2 shows the average normalized response of 131 MT neurons during the MGS task (see Methods). According to this figure, on average, no change in spiking activity is observed when comparing the response of neurons before and after the visual stimulus (i.e., the fixation and memory periods, respectively). The inset bar graph in Figure 2 reveals no significant difference between the average spiking activity of MT neurons in the memory vs. fixation periods in either the IN or the OUT memory condition ($p_{\text{fixation IN vs. memory IN}} = 0.385$ and $p_{\text{fixation OUT vs. memory OUT}} = 0.385$).
To detect the presence of the memory, we used the neural spiking activity in IN conditions during the fixation (when no working memory is involved) and memory (when deployment of top-down working memory signals is present) periods for feature extraction, selection, and classification. In the feature extraction step, 41 features are extracted from the IN conditions in fixation and memory periods. Therefore, the feature vector can be described as:
- Indices 1–6 are the fractal-based features: HFD, KFD, GHE, MSFD, LTD, and FV.
- Indices 7–34 are the transform-based features: the mean (indices 7–10), variance (indices 11–14), kurtosis (indices 15–18), skewness (indices 19–22), median (indices 23–26), maximum (indices 27–30), and minimum (indices 31–34) of the components of the DWT, DFT, DSTFT, and DST, respectively.
- Indices 35–41 are the statistical features: the mean, variance, kurtosis, skewness, median, maximum, and minimum values of the firing-rate signals within the fixation and memory periods.
Finally, the selected features from the fixation IN and memory IN periods were classified using the SVM and KNN classifiers. Based on the selection method, classification was performed in four cases. To estimate the performance of the classifiers on the data, the k-fold cross-validation method (with $k=10$) and its balanced version, called the A-test algorithm (with $k=10$ and ten iterations), were employed. Unlike the k-fold cross-validation method, the A-test algorithm ensures that the same number of samples from each class is included in every training and testing fold. Therefore, the A-test method can be more reliable, especially when the classes contain unequal numbers of samples.
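The evaluation protocol can be sketched with stratified folds as an approximation of the A-test's class balancing (the classifier, synthetic data, and sizes are illustrative assumptions; 10 folds × 10 iterations yields 100 accuracy scores):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=262, n_features=20, random_state=1)
accs = []
# Stratified folds keep the class ratio equal in every split (the
# balancing idea behind the A-test); reshuffling the folds per iteration
# provides the repetition dimension.
for it in range(10):
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=it)
    for train_idx, test_idx in skf.split(X, y):
        clf = KNeighborsClassifier(n_neighbors=3).fit(X[train_idx], y[train_idx])
        accs.append(clf.score(X[test_idx], y[test_idx]))
mean_acc, std_acc = np.mean(accs), np.std(accs)
```

The mean and standard deviation over the 100 scores correspond to the values reported in Tables 1–4.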
In the first case, all extracted features, including 41 linear and nonlinear features, were used for the classification step. The results of the classification performance are shown in Figure 3. The details of the results can be found in Table 1.
| Classifier | Assessment criterion | K-fold (K=10) | A-test (K=10, iter=10) |
|---|---|---|---|
| SVM | Accuracy | 98.85±2.60 | 98.77±0.16 |
|  | Sensitivity | 97.69±5.19 | 97.54±0.32 |
|  | Specificity | 100±0 | 100±0 |
| KNN | Accuracy | 98.46±1.99 | 97.92±0.41 |
|  | Sensitivity | 97.69±3.72 | 97±0.67 |
|  | Specificity | 99.23±2.43 | 98.85±0.54 |
Figure 3 shows that the SVM and KNN classifiers performed similarly ($\mathrm{Accuracy}_{\mathrm{SVM}}=98.85\pm2.60$; $\mathrm{Accuracy}_{\mathrm{KNN}}=98.46\pm1.99$) in distinguishing fixation IN from memory IN data. However, the SVM performed slightly better than the KNN ($\mathrm{Accuracy}_{\mathrm{SVM}}-\mathrm{Accuracy}_{\mathrm{KNN}}<1\%$) in both cross-validation approaches. Table 1 reveals that the average accuracy is higher in the K-fold approach; however, the standard deviation obtained with the A-test method is considerably lower. This shows that the average accuracy does not change remarkably across iterations, and thus the results are reliable.
Here, 20 features were selected using the GA feature selection method: three fractal-based features (the HFD, the KFD, and the FV), 14 transform-based features (the mean of DFT, the mean of DST, the variance of DFT, the variance of DSTFT, the variance of DST, the kurtosis of DWT, the kurtosis of DSTFT, the skewness of DFT, the skewness of DSTFT, the median of DWT, the median of DFT, the median of DST, the maximum of DWT, and the minimum of DWT), and three statistical features (the skewness, the median, and the minimum values). Figure 4 and Table 2 present the classification results using the 20 GA-selected features.
| Classifier | Assessment criterion | K-fold (K=10) | A-test (K=10, iter=10) |
|---|---|---|---|
| SVM | Accuracy | 98.46±2.69 | 99.04±0.33 |
|  | Sensitivity | 96.92±5.38 | 98.08±0.65 |
|  | Specificity | 100±0 | 100±0 |
| KNN | Accuracy | 99.23±1.62 | 99.19±0.12 |
|  | Sensitivity | 98.46±3.24 | 98.38±0.24 |
|  | Specificity | 100±0 | 100±0 |
According to Figure 4, and unlike Figure 3, the KNN classifier performed slightly better than the SVM classifier ($\mathrm{Accuracy}_{\mathrm{KNN}}-\mathrm{Accuracy}_{\mathrm{SVM}}<1\%$). Table 2 shows that the GA-selected features improved the classification performance, especially for the KNN classifier ($\mathrm{Accuracy}_{\mathrm{KNN}}=99.23\pm1.62$). In addition, the KNN classifier gives more reliable results, since it generally has a lower standard deviation.
Employing the PSO feature selection method, 17 features were selected: three fractal-based features (the HFD, the KFD, and the FV), 12 transform-based features (the mean of DFT, the mean of DST, the variance of DFT, the variance of DSTFT, the kurtosis of DWT, the kurtosis of DSTFT, the skewness of DFT, the median of DWT, the median of DFT, the median of DST, the maximum of DWT, and the minimum of DWT), and two statistical features (the variance and the maximum values). Similar to the previous subsections, Figure 5 and Table 3 illustrate the classification performance using the PSO-selected features.
| Classifier | Assessment criterion | K-fold (K=10) | A-test (K=10, iter=10) |
|---|---|---|---|
| SVM | Accuracy | 99.23±2.43 | 99.50±0.26 |
|  | Sensitivity | 98.46±4.87 | 99.00±0.52 |
|  | Specificity | 100±0 | 100±0 |
| KNN | Accuracy | 98.85±1.86 | 98.77±0.24 |
|  | Sensitivity | 97.69±3.72 | 97.77±0.24 |
|  | Specificity | 100±0 | 99.77±0.37 |
In contrast to Figure 3, Figure 5 shows that the PSO-selected features helped the SVM classifier improve its performance ($\mathrm{Accuracy}_{\mathrm{SVM}}=99.50\pm0.26$) more considerably than the KNN classifier ($\mathrm{Accuracy}_{\mathrm{KNN}}=98.85\pm1.86$). Besides, Table 3 reveals that the KNN classifier gives more reliable results, due to its lower standard deviation on average.
Using the ACO algorithm for selecting the optimum features, a total of 20 features were selected: three fractal-based features (the HFD, the GHE, and the FV), 12 transform-based features (the skewness of DSTFT, the variance of DWT, the kurtosis of DSTFT, the skewness of DWT, the kurtosis of DFT, the mean of DST, the skewness of DFT, the minimum of DFT, the mean of DWT, the variance of DSTFT, the variance of DST, and the maximum of DFT), and five statistical features (the skewness, the maximum, the variance, the kurtosis, and the minimum values). Figure 6, as well as Table 4, contains the results of performing the classification using the ACO-selected features.
| Classifier | Assessment criterion | K-fold (K=10) | A-test (K=10, iter=10) |
|---|---|---|---|
| SVM | Accuracy | 99.23±1.62 | 99.42±0.20 |
| | Sensitivity | 98.46±3.24 | 98.85±0.41 |
| | Specificity | 100±0 | 100±0 |
| KNN | Accuracy | 99.62±1.22 | 99.65±0.12 |
| | Sensitivity | 99.23±2.43 | 99.31±0.24 |
| | Specificity | 100±0 | 100±0 |
Figure 6 illustrates that, using the ACO-selected features, the average performance of both classifiers in detecting the presence of working memory is significantly enhanced compared to Figure 3, in which all features were involved in the classification procedure (Accuracy_SVM = 99.42±0.20; Accuracy_KNN = 99.65±0.12). From Table 4, it can be seen that the KNN classifier not only performs slightly better (Accuracy_KNN − Accuracy_SVM < 0.5%) but, owing to its lower standard deviation, is also more reliable.
The brain is the most complex system in the human body, and this complexity is reflected in the signals recorded from it. Thus, brain-derived data such as the EEG or the spiking activity of neurons predominantly have nonlinear properties. This nonlinearity can be captured by the FD, which is an index of complexity. Different algorithms have been proposed to obtain the FD of a time series, such as HFD [30], KFD [31], GHE [32], MSFD [33], LTFD [35], and FV [37]; alongside these, transform-based features and statistical indexes are popular in signal processing. The main objective of this paper was to examine the ability of machine-learning methods to detect the presence of working memory using various linear and nonlinear features. Therefore, we used several algorithms to obtain the FD value (HFD, KFD, GHE, MSFD, LTFD, and FV) and different transforms to obtain the frequency and/or time-frequency components (statistical measures of the DWT, DFT, DSTFT, and DST) of the average spiking activity of MT neurons. We also included several important statistical indexes (the mean, variance, kurtosis, skewness, median, maximum, and minimum values) in the feature set.
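As an illustration of one of these estimators, Higuchi's algorithm (HFD) can be sketched as follows. This is a minimal version, with the maximum scale `k_max` chosen arbitrarily; it is not tied to the paper's exact settings.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1-D series: the slope of
    log(curve length L(k)) versus log(1/k) over scales k = 1..k_max."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, k_max + 1)
    lengths = []
    for k in ks:
        lk = []
        for m in range(k):                      # k down-sampled sub-series
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length of the sub-series x[m], x[m+k], ...
            lm = np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k)
            lk.append(lm / k)
        lengths.append(np.mean(lk))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
    return slope

# Sanity checks: a straight line has FD close to 1, white noise close to 2.
rng = np.random.default_rng(0)
print(higuchi_fd(np.arange(1000.0)))        # ~1.0
print(higuchi_fd(rng.normal(size=1000)))    # ~2.0
```

The other FD estimators mentioned above (KFD, GHE, MSFD, LTFD, FV) follow the same pattern of mapping a time series to a single complexity index, so each contributes one scalar feature per recording.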
Selecting the optimum features can be cast as an optimization problem that, in most cases, improves classifier performance. Such methods focus on finding the feature subset that yields the best classification result; in doing so, they also reduce the dimensionality of the feature space and simplify the classification problem. Hence, after performing the classification with all extracted features, we examined whether feature selection could enhance the classification performance, using the GA, PSO, and ACO algorithms to select the optimum features for detecting the presence of working memory. In the classification step, two machine-learning algorithms, namely the SVM and KNN classifiers, were employed to detect the presence of memory. KNN is typically considered a nonlinear classifier, as it has a nonlinear decision boundary, while SVM becomes a nonlinear classifier when it uses a nonlinear kernel function. In general, a nonlinear decision boundary enables a classifier to learn and distinguish the data classes more precisely and to assign a class-membership probability to new data. For this reason, we used SVM with a third-order polynomial kernel function and KNN with three nearest neighbors. It should be noted that cross-validation methods can establish the validity of classification results, particularly when the number of samples is not high. Therefore, K-fold (with 10 folds) and A-test (with 10 folds and 10 iterations) cross-validation were performed to demonstrate the robustness of the results.
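The classification and validation protocol above can be reproduced schematically as follows. scikit-learn is assumed, synthetic data stands in for the actual feature matrix, and the A-test is interpreted here as 10 repetitions of shuffled 10-fold cross-validation; all of these are illustrative assumptions rather than the authors' code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in feature matrix; in the paper this would be the extracted features.
X, y = make_classification(n_samples=260, n_features=35, n_informative=10,
                           random_state=0)

classifiers = {
    "SVM (poly, degree 3)": make_pipeline(StandardScaler(),
                                          SVC(kernel="poly", degree=3)),
    "KNN (k = 3)": make_pipeline(StandardScaler(),
                                 KNeighborsClassifier(n_neighbors=3)),
}

results = {}
for name, clf in classifiers.items():
    # A-test: repeat 10-fold cross-validation 10 times with re-shuffling,
    # then report the mean and standard deviation of the per-run accuracies.
    run_means = []
    for it in range(10):
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=it)
        run_means.append(cross_val_score(clf, X, y, cv=cv).mean())
    results[name] = (np.mean(run_means), np.std(run_means))
    print(f"{name}: {100 * results[name][0]:.2f} ± {100 * results[name][1]:.2f}")
```

Averaging over repeated shuffled folds is what makes the A-test standard deviations in the tables so much smaller than the single-run K-fold ones: the per-run means are already averages over 10 folds.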
The best classification performance of the SVM and KNN classifiers is summarized in Figure 7. According to this figure, when no selection algorithm was employed, the SVM classifier gave the best performance (Accuracy_SVM = 98.77±0.16). The same holds when the PSO algorithm was used to select the features (Accuracy_SVM = 99.50±0.26). In contrast, when the GA- and ACO-selected features were used as the input of the classification algorithms, the KNN classifier reached the higher average accuracy (Accuracy_KNN = 99.19±0.12 and 99.65±0.12, respectively). Moreover, Figure 7 shows that employing a selection method can improve classification performance, since in all cases the average accuracy grew compared to the case where no selection method was employed.
In total, Figure 7 reveals that when the features were selected using the ACO algorithm, the KNN classifier could detect the presence of working memory with an accuracy of 99.65% and a standard deviation of 0.12 (using the A-test cross-validation method), which is the highest average accuracy among the studied cases. In this case, three of the six FD-based features (the HFD, the GHE, and the FV), 12 of the 28 transform-based features (the skewness of DSTFT, the variance of DWT, the kurtosis of DSTFT, the skewness of DWT, the kurtosis of DFT, the mean of DST, the skewness of DFT, the minimum of DFT, the mean of DWT, the variance of DSTFT, the variance of DST, and the maximum of DFT), and five of the seven statistical features (the skewness, the maximum, the variance, the kurtosis, and the minimum values) were involved in the classification procedure.
To obtain the neural code for spatial working memory represented in the firing rate of visual neurons, we compared the responses of MT neurons during the memory period (when the monkey is actively memorizing a location) with their activity during the fixation period (when no working memory is present). This approach could be questioned, as one could argue that any differences between the neural responses during the memory and fixation periods might arise from cognitive signals other than spatial working memory, such as arousal or expectation. Here we review a series of neurophysiological findings showing that these response changes (i.e., the differences in neural responses between the memory and fixation periods) depend on the content of working memory:
- By measuring the responses of extrastriate neurons to visual probes presented during the fixation and memory periods, a recent study showed a strong modulation of the RF profile of V4 and MT neurons that was dependent on the content of working memory [19]. During the maintenance of spatial information, only those neurons whose RFs during the fixation period were close to the remembered location expanded and shifted their RFs toward that location during the memory period.
- It was also found that the encoding of a visual probe's location by the population of MT neurons was enhanced during the memory period compared with the fixation period. Specifically, this was measured by the ability of individual MT neurons' firing activity to discriminate between two different visual probes (two-point discriminability). This memory-related enhancement in two-point discriminability occurred only for visual probes presented near the locus of working memory.
- At the level of the LFP, it has been shown that the amount of information about the visual input conveyed by the alpha-beta phase of spike times increases during the memory period compared with the fixation period. This phenomenon occurred only for visual probes presented near the remembered location.
- Furthermore, the discrimination between visual probes based on the phase of each spike in the alpha-beta frequency range was shown to be enhanced during the memory period compared with fixation. This enhancement was observed for visual probes presented near the locus of working memory.
Because the differences between the neural responses during the memory and fixation periods occur in a spatially specific manner (i.e., near the locus of working memory), it is very unlikely that these response changes reflect a cognitive signal other than spatial working memory, such as arousal.
Yaser Merrikhi collected this dataset in the laboratory of Dr. Behrad Noudoost at Montana State University, Bozeman, MT, USA. The lab's experiments were supported by MSU start-up fund, Whitehall 2014-5-18, NIH R01EY026924, and NSF143221 and 1632738 grants to Dr. Noudoost. We would like to thank Dr. Noudoost for sharing this dataset. This work is funded by the Centre for Nonlinear Systems, Chennai Institute of Technology, India, vide funding number CIT/CNS/2022/RP-006.
The authors declare there is no conflict of interest.
| Classifier | Assessment criterion | K-fold (K=10) | A-test (K=10, iter=10) |
|---|---|---|---|
| SVM | Accuracy | 98.85±2.60 | 98.77±0.16 |
| | Sensitivity | 97.69±5.19 | 97.54±0.32 |
| | Specificity | 100±0 | 100±0 |
| KNN | Accuracy | 98.46±1.99 | 97.92±0.41 |
| | Sensitivity | 97.69±3.72 | 97±0.67 |
| | Specificity | 99.23±2.43 | 98.85±0.54 |
| Classifier | Assessment criterion | K-fold (K=10) | A-test (K=10, iter=10) |
|---|---|---|---|
| SVM | Accuracy | 98.46±2.69 | 99.04±0.33 |
| | Sensitivity | 96.92±5.38 | 98.08±0.65 |
| | Specificity | 100±0 | 100±0 |
| KNN | Accuracy | 99.23±1.62 | 99.19±0.12 |
| | Sensitivity | 98.46±3.24 | 98.38±0.24 |
| | Specificity | 100±0 | 100±0 |