
Infectious diseases have been one of the major causes of human mortality, and mathematical models have played significant roles in understanding spread mechanisms and in controlling contagious diseases. In this paper, we propose a delayed SEIR epidemic model with intervention strategies and recovery under low availability of resources. The non-delayed and delayed models both possess two equilibria: the disease-free equilibrium and the endemic equilibrium. When the basic reproduction number R0 = 1, the non-delayed system undergoes a transcritical bifurcation. For the delayed system, we incorporate two important time delays: τ1 represents the latent period of the intervention strategies, and τ2 represents the period required to cure infected individuals. Time delays change the system dynamics via Hopf bifurcation and oscillations. The direction and stability of the delay-induced Hopf bifurcation are established using normal form theory and the center manifold theorem. Furthermore, we rigorously prove that local Hopf bifurcation implies global Hopf bifurcation. Stability switching curves and crossing directions are analyzed in the two-delay parameter plane, which allows both delays to vary simultaneously. Numerical results demonstrate that increasing the intervention strength reduces the infection level, whereas increasing the limitation of treatment raises it. Our quantitative observations can be useful for exploring the relative importance of intervention and medical resources. As a timely application, we parameterize the model for COVID-19 in Spain and Italy. With strict intervention policies, the infection numbers would have been greatly reduced in the early phase of COVID-19 in Spain and Italy. We also show that reducing the time delays in intervention and recovery would have decreased the total number of cases in that early phase. Our work highlights the necessity of considering the time delays in intervention and recovery in epidemic models.
Citation: Sarita Bugalia, Jai Prakash Tripathi, Hao Wang. Mathematical modeling of intervention and low medical resource availability with delays: Applications to COVID-19 outbreaks in Spain and Italy[J]. Mathematical Biosciences and Engineering, 2021, 18(5): 5865-5920. doi: 10.3934/mbe.2021295
Neurotoxins are toxic substances that are destructive to nerve tissue, such as AF64A, 6-hydroxydopamine, and kainic acid. In principle, these toxins act on the ion channels at the nerve-muscle junction, destroying cholinergic neurons, inhibiting the release of acetylcholine, and blocking nerve-muscle conduction, which causes muscle weakness and leaves the muscle unable to contract. In severe cases, these toxins can cause suffocation and death. Neurotoxins can be classified into presynaptic and postsynaptic types according to their mechanism of action [1]. Presynaptic neurotoxins mainly act on the presynaptic membrane [2]; owing to the specificity of their enzymatic activity, they typically block neuromuscular transmission and inhibit the release of neurotransmitters. The targets of postsynaptic neurotoxins are located in the postsynaptic membrane, where they can bind to acetylcholine receptors [3]. For example, β-methylamino-L-alanine, also known as BMAA, can damage motor neurons and has been implicated in Parkinson's syndrome. Cobra neurotoxin, a short-chain and mainly postsynaptic neurotoxin, is the most important lethal component of cobra venom. Because cobra venom neurotoxin is non-addictive and does not induce drug resistance, it has broad prospects in detoxification treatment for persons with drug addiction. Therefore, the study of presynaptic and postsynaptic neurotoxins will contribute to the development of medicine, for example, by providing important clues for drug design [4,5,6].
Neurotoxins are a type of protein, and while their structure and function can be determined through biochemical experiments, such work is time-consuming and expensive [7,8,9,10]. In the genome era, many biological sequences are available [11], giving us a variety of methods to predict protein structure and function [12,13,14,15]. The key to correctly predicting protein structure and function is how to analyze sequence-derived features using computational methods. Therefore, we can use machine learning methods for protein type prediction [16]. Generally, the use of machine learning to predict biological sequences involves the following main steps: feature extraction, model construction, and performance evaluation [17,18,19,20,21,22]. In 2009, a diversity-based method for identifying presynaptic and postsynaptic neurotoxins was proposed; the algorithm is based on amino acid and pseudo-amino acid composition [23]. To further improve the prediction accuracy, Tang et al. proposed a new feature selection technique based on analysis of variance (ANOVA) [7,24]. In this article, we construct a predictive model, Neu_LR, to correctly identify presynaptic and postsynaptic neurotoxins. The monoMonokGap method is used to extract the frequency characteristics of presynaptic and postsynaptic neurotoxin sequences and to carry out feature selection. Then, based on the important features obtained after dimensionality reduction, the logistic regression algorithm is used to construct the prediction model Neu_LR.
As the effectiveness of machine learning has been repeatedly verified in recent years, protein classification using machine learning has become an active research area [25,26]. The key to protein classification prediction with machine learning lies in data processing and classification algorithms. The general prediction process is to first use an algorithm to extract features from the proteins and then use different classifiers to predict the protein class. Therefore, effective combinations of feature extraction algorithms and classifiers have been extensively studied [27,28,29,30,31].
The research contents of this study are as follows. We first download the presynaptic and postsynaptic neurotoxins from the UniProt database, and then the monoMonokGap feature expression algorithm is used to extract and select features from the data set to obtain the optimal features. Second, the feature vectors obtained by dimensionality reduction are taken as the input, the model is built using the logistic regression algorithm, and ten-fold cross-validation and independent test set validation are carried out. Figure 1 shows the flow chart of building the model in this paper [32,33].
High-quality data sets are the basis for building reliable and accurate models [34,35]. The UniProt database provides the scientific community with a single, centralized, authoritative source of protein sequence and functional information [36]. The data set used in this article is the same as that used in the research of Tang et al. A total of 91 presynaptic and 165 postsynaptic neurotoxins were downloaded from the UniProt database. Since ambiguous information reduces the quality of the benchmark data set and makes the resulting model unreliable, sequences containing unknown residues (such as "X", "Z", "J", "O" and "B") were eliminated. Because highly similar protein sequences in the data set can lead to overestimated results, the sequence identity cut-off was set to 80%. After this screening, our data set contains 90 presynaptic neurotoxins and 165 postsynaptic neurotoxins, 255 neurotoxin samples in total, which can be expressed by the following formula:
$$ S = S_{pr} \cup S_{po} $$
where the subset $S_{pr}$ is the collection of 90 presynaptic neurotoxins and $S_{po}$ is the collection of 165 postsynaptic neurotoxins.
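As a minimal sketch of the screening step described above (assuming the sequences are stored in plain FASTA files with hypothetical names; the 80% identity filtering is typically delegated to an external clustering tool and is only noted in a comment), the ambiguous-residue filter could look like this:

```python
# Sketch: drop sequences containing ambiguous residues before building the benchmark set.
# The 80% sequence-identity filtering would be done separately with a clustering tool.

AMBIGUOUS = set("XZJOB")  # unknown/ambiguous residue codes to exclude

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line.upper())
    if header is not None:
        yield header, "".join(chunks)

def clean_sequences(path):
    """Keep only sequences free of ambiguous residues."""
    return [(h, s) for h, s in read_fasta(path) if not (set(s) & AMBIGUOUS)]

# Hypothetical file names for the two classes:
# presynaptic = clean_sequences("presynaptic.fasta")
# postsynaptic = clean_sequences("postsynaptic.fasta")
```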
Each protein sequence can be expressed by the following formula:
$$ R = r_1 r_2 r_3 \cdots r_L $$
Here $R$ stands for a protein sequence, $r_i$ stands for the residue at position $i$, and $L$ is the length of the protein sequence. Since some machine learning methods cannot learn directly from $R$, the protein sequences have to be converted into fixed-length vectors [37].
As the first step in building a biological sequence analysis model, feature extraction is an important part of the correct prediction of protein sequences. Generally, we use a feature extraction method to convert each input neurotoxin sequence into a fixed-length numerical vector and then use MRMD2.0 to reduce the dimensionality of the obtained feature vectors as needed. Finally, we use the reduced-dimensionality vectors as the input of the classifier model to perform the classification [38].
monoMonokGap is a feature extraction method in which the best features are selected from a large number of generated candidates. It considers kGap-separated pairs within a sequence, and the frequencies of these gapped sub-sequences are treated as prediction features. It can be used for feature extraction from DNA, RNA, and protein sequences [39]. The selection range of kGap is 1 to 5 for DNA and RNA sequences and 1 to 10 for protein sequences. When kGap is small, the generated feature set is small and the frequencies of these features retain partial, short-range sequence-order information; when kGap is moderately large, more features are generated and longer-range sequence information is retained [40]. Specifically, when kGap = 1, the sequence is encoded as the frequencies of X_X patterns, producing 4×1×4-dimensional features (DNA/RNA) or 20×1×20-dimensional features (protein); when kGap = 2, 4×2×4-dimensional or 20×2×20-dimensional features are generated, and so on [41]. That is, when kGap = n, a DNA or RNA sequence produces 4×4×n features and a protein sequence produces 20×20×n features. The generated feature formats are as follows:
when kGap = 1, the characteristic structure is X_X;
when kGap = 2, the characteristic structures are X_X and X__X; and so on.
where X is defined as:
$$ X = \begin{cases} \{A, C, G, T\} & \text{for DNA sequences,} \\ \{A, C, G, U\} & \text{for RNA sequences,} \\ \{A, C, D, E, F, G, H, I, K, L, M, N, P, Q, R, S, T, V, W, Y\} & \text{for protein sequences.} \end{cases} $$
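To make the encoding concrete, the following is a minimal sketch of gapped-pair counting for protein sequences under the description above; it illustrates the idea rather than the exact implementation used by the feature-extraction software, and the function name is our own.

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20-letter protein alphabet

def mono_mono_kgap(sequence, k_gap):
    """Frequencies of gapped residue pairs X_X, X__X, ..., up to k_gap gaps.

    For each g in 1..k_gap, the features are the frequencies of residue pairs
    separated by exactly g positions, giving 20 x 20 x k_gap features in total.
    """
    features = {}
    for g in range(1, k_gap + 1):
        counts = {pair: 0 for pair in product(AMINO_ACIDS, repeat=2)}
        n_pairs = max(len(sequence) - g - 1, 0)
        for i in range(n_pairs):
            pair = (sequence[i], sequence[i + g + 1])
            if pair in counts:          # ignore any non-standard residues
                counts[pair] += 1
        for (a, b), c in counts.items():
            name = a + "_" * g + b      # e.g. "A_C" for one gap, "A__C" for two gaps
            features[name] = c / n_pairs if n_pairs else 0.0
    return features

# A protein sequence with k_gap = 9 yields a 20 x 20 x 9 = 3600-dimensional vector.
```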
For feature selection, to reduce the negative impact of high dimensionality while retaining informative features, the AdaBoost classification model can be used to calculate the average impurity reduction. AdaBoost is a popular boosting classification algorithm in data mining. Its core idea is to train multiple weak classifiers on the same training set and then combine these weak classifiers into a strong classifier [42]. Since Freund and Schapire proposed the AdaBoost algorithm [43,44,45], improvements to AdaBoost have mainly involved two aspects: 1) adjusting the weights of the weak classifiers in new ways and 2) improving the training procedure to reduce the classifier's error rate or the training time.
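A minimal sketch of ranking features by AdaBoost's impurity-based importances, assuming scikit-learn and an already-encoded feature matrix X with labels y; the cut-off of 200 features is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def rank_features_by_impurity(X, y, n_estimators=100, random_state=0):
    """Rank feature columns by AdaBoost's impurity-based importances (largest first)."""
    model = AdaBoostClassifier(n_estimators=n_estimators, random_state=random_state)
    model.fit(X, y)
    order = np.argsort(model.feature_importances_)[::-1]
    return order, model.feature_importances_[order]

# Example usage (threshold chosen only for illustration):
# order, scores = rank_features_by_impurity(X, y)
# X_reduced = X[:, order[:200]]
```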
Each residue in a protein sequence has many physical and chemical properties, so the protein sequence can be regarded as a time series with corresponding properties [46]. PSI-BLAST is run against the database to generate the profile of each sequence in the data set.
AC-PSSM [47] uses PSI-BLAST [48] as its search tool. PSI-BLAST provides a means to detect distant relationships between proteins: it is a profile-based search method that compares protein sequences against other protein sequences with greater sensitivity. The database used here is the nr (non-redundant) database.
AC-PSSM can convert PSSMs of different lengths into vectors of fixed length. AC measures the correlation between two residues with the same property, which can be expressed as:
$$ AC(i, \mathrm{lag}) = \frac{1}{L - \mathrm{lag}} \sum_{j=1}^{L-\mathrm{lag}} \left(S_{i,j} - \bar{S}_i\right)\left(S_{i,j+\mathrm{lag}} - \bar{S}_i\right), \qquad \bar{S}_i = \frac{1}{L} \sum_{j=1}^{L} S_{i,j} \tag{1} $$
where $i$ is one of the 20 residue types, $L$ is the length of the protein sequence, $S_{i,j}$ is the PSSM score of amino acid $i$ at the $j$th position, and $\bar{S}_i$ is the average score of amino acid $i$ over the entire protein sequence. In this way, the number of AC features is $20 \times \mathrm{LAG}$, where LAG is the maximum lag and lag takes every integer value from 1 to LAG. In this article, we set LAG to its default value of 2, that is, the maximum lag is 2. Figure 2 shows the flowchart for generating AC-PSSM when the maximum lag is 2. When LAG = 2, AC-PSSM generates a 40-column feature vector. The first 20 columns represent the features for a lag of one position in the PSSM matrix (lag = 1), expressed as
(A,A,1),(R,R,1),(N,N,1),⋯,(V,V,1) |
The last 20 columns represent the features for a lag of two positions in the PSSM matrix (lag = 2), expressed as
(A,A,2),(R,R,2),(N,N,2),⋯,(V,V,2) |
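A minimal sketch of Eq (1), assuming the PSSM has already been produced by PSI-BLAST and is available as an L×20 numeric array (running PSI-BLAST itself is not shown):

```python
import numpy as np

def ac_pssm(pssm, max_lag=2):
    """Autocovariance features from an (L x 20) PSSM, following Eq (1).

    Returns a vector of length 20 * max_lag: for each lag and each residue
    column i, the mean of (S[j,i] - mean_i) * (S[j+lag,i] - mean_i).
    """
    pssm = np.asarray(pssm, dtype=float)
    L = pssm.shape[0]
    centered = pssm - pssm.mean(axis=0)      # subtract the column means \bar{S}_i
    features = []
    for lag in range(1, max_lag + 1):
        ac = (centered[: L - lag] * centered[lag:]).sum(axis=0) / (L - lag)
        features.append(ac)
    return np.concatenate(features)          # 40-dimensional when max_lag == 2

# Example with a random stand-in PSSM, just to show the output shape:
# ac_pssm(np.random.randn(120, 20)).shape  # -> (40,)
```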
Feature selection is also known as variable selection or attribute selection; it is defined as the process of selecting the features that contribute the most to the prediction variable or output of interest. After extracting features from the sequences, the MRMD2.0 algorithm is used for feature selection.
MRMD2.0 is based on the PageRank algorithm. It is a Python-based tool that reduces the dimensionality of data sets and also draws performance curves as a function of feature dimension. A small amount of accuracy can be traded for far fewer features by selecting a dimensionality from the performance curve [49,50].
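MRMD2.0 is a standalone tool, so the following is only a rough sketch of the underlying idea, drawing an accuracy-versus-dimension curve over a precomputed feature ranking with scikit-learn and picking a smaller dimensionality at a modest accuracy cost; the candidate dimensions are arbitrary.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def accuracy_vs_dimension(X, y, ranked_idx, dims=(50, 100, 200, 400, 800)):
    """Cross-validated accuracy when keeping only the top-d ranked features."""
    curve = {}
    for d in dims:
        clf = LogisticRegression(max_iter=1000)
        curve[d] = cross_val_score(clf, X[:, ranked_idx[:d]], y, cv=10).mean()
    return curve

# A dimensionality where the curve plateaus can then be chosen, trading a
# little accuracy for far fewer features.
```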
The basic idea of classification is to learn the parameters of a classifier from training data, with the goal of fitting the training set with the smallest loss of accuracy [51,52]. Weka (3.8.5) can be used to build data mining and prediction models; it provides many different classification algorithms, such as random forests and Bayesian classifiers [27,49,53].
The random forest algorithm is an ensemble method composed of multiple decision tree classifiers, each subclassifier being a CART classification-and-regression tree; random forests can therefore be used for both classification and regression. The algorithm is highly resistant to overfitting, since averaging over the decision trees reduces the overfitting risk [54,55]. Its advantages are simple implementation, high accuracy, fast training, strong resistance to overfitting, and suitability as a benchmark model. Its disadvantage is that the model can still overfit on sample sets with relatively large noise, and when there are many decision trees, the training time and memory requirements become relatively large.
Logistic regression is a machine learning method for solving binary classification problems and is a form of generalized linear model [56]. A hyperplane can be established to classify the samples, which can be described by the following formula:
$$ h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}} \tag{2} $$
where the sample $x = [x_1, x_2, \cdots, x_n]$ is an $n$-dimensional vector and $g$ is the logistic function, whose general form is $g(z) = \frac{1}{1 + e^{-z}}$. The advantages of logistic regression are its low computational cost and its ease of understanding and implementation; the disadvantages are that it can underfit and its classification accuracy may not be high.
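A minimal sketch of fitting the logistic model of Eq (2) to the reduced feature vectors and obtaining ten-fold cross-validated predictions with scikit-learn; the variable names and label convention are ours, not taken from the paper's code.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

def fit_and_validate(X, y):
    """Ten-fold cross-validated confusion matrix for a logistic regression classifier."""
    clf = LogisticRegression(max_iter=1000)       # sigmoid of a linear score, as in Eq (2)
    y_pred = cross_val_predict(clf, X, y, cv=10)  # out-of-fold predictions
    return confusion_matrix(y, y_pred)

# Here y could encode postsynaptic as 0 and presynaptic as 1 (an arbitrary convention).
```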
In this article, k-fold cross-validation was chosen to test predictions. Specificity (SP), sensitivity (SN), and accuracy (ACC) [57,58,59,60,61] were used to evaluate our proposed method [62]; they can be expressed as:
$$ SN = 1 - \frac{N_{pr}^{po}}{N_{pr}} \tag{3} $$
$$ SP = 1 - \frac{N_{po}^{pr}}{N_{po}} \tag{4} $$
$$ ACC = 1 - \frac{N_{pr}^{po} + N_{po}^{pr}}{N_{pr} + N_{po}} \tag{5} $$
where $N_{pr}$ and $N_{po}$ denote the numbers of presynaptic and postsynaptic neurotoxins, respectively; $N_{pr}^{po}$ is the number of presynaptic neurotoxins incorrectly predicted as postsynaptic; and $N_{po}^{pr}$ is the number of postsynaptic neurotoxins incorrectly predicted as presynaptic.
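These measures follow directly from the confusion matrix; a small sketch, using the convention (our choice) that class 1 is presynaptic and class 0 is postsynaptic:

```python
import numpy as np

def sn_sp_acc(conf):
    """SN, SP, ACC from a 2x2 confusion matrix (rows = true class, cols = predicted class).

    Convention: index 1 = presynaptic, index 0 = postsynaptic.
    """
    conf = np.asarray(conf, dtype=float)
    n_pr = conf[1].sum()       # true presynaptic samples
    n_po = conf[0].sum()       # true postsynaptic samples
    pr_as_po = conf[1, 0]      # presynaptic predicted as postsynaptic
    po_as_pr = conf[0, 1]      # postsynaptic predicted as presynaptic
    sn = 1 - pr_as_po / n_pr                          # Eq (3)
    sp = 1 - po_as_pr / n_po                          # Eq (4)
    acc = 1 - (pr_as_po + po_as_pr) / (n_pr + n_po)   # Eq (5)
    return sn, sp, acc
```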
Previous studies have shown that feature extraction is very important for predictor variables, and optimized features can improve the accuracy of model prediction [63,64,65,66,67]. Especially in some high-dimensional data, there may be some noise and redundant information, which will have some negative effects on the prediction.
In this section, the prediction results of monoMonokGap, AC-PSSM, 188D features, Geary correlation features, and amino acid composition (AAC) [68,69] features under logistic regression and random forest are compared. The results are shown in Tables 1 and 2 (the maximum values are indicated in bold). It can be seen from Tables 1 and 2 that the monoMonokGap feature used in this model has the best performance on all indicators. Under the random forest algorithm, monoMonokGap performs best when kGap = 7; under the logistic regression algorithm, it achieves the best result when kGap = 9. Comparing the two, kGap = 9 with LR has the best overall performance, with ACC, AUC, SP, and SN values of 99.6078%, 0.996, 0.998, and 0.996, respectively. This also demonstrates the effectiveness of monoMonokGap (kGap = 9), so we choose monoMonokGap (kGap = 9) as the feature expression method of the model.
Method | SN | SP | AUC | ACC (%) |
monoMonokGap (kGap=1) | 0.910 | 0.870 | 0.966 | 90.9804 |
monoMonokGap (kGap=3) | 0.941 | 0.912 | 0.985 | 94.1176 |
monoMonokGap (kGap=5) | 0.945 | 0.920 | 0.992 | 94.5098 |
monoMonokGap (kGap=7) | 0.957 | 0.936 | 0.991 | 95.6863 |
monoMonokGap (kGap=9) | 0.949 | 0.917 | 0.992 | 94.902 |
AC-PSSM | 0.839 | 0.776 | 0.868 | 83.9216 |
188D | 0.710 | 0.609 | 0.794 | 70.9804 |
Geary | 0.533 | 0.417 | 0.481 | 53.3333 |
AAC | 0.686 | 0.602 | 0.736 | 68.6275 |
Method | SN | SP | AUC | ACC (%) |
monoMonokGap (kGap=1) | 0.929 | 0.901 | 0.971 | 92.9412 |
monoMonokGap (kGap=3) | 0.925 | 0.889 | 0.988 | 92.549 |
monoMonokGap (kGap=5) | 0.984 | 0.986 | 0.994 | 98.4314 |
monoMonokGap (kGap=7) | 0.973 | 0.980 | 0.995 | 97.2549 |
monoMonokGap (kGap=9) | 0.996 | 0.998 | 0.996 | 99.6078 |
AC-PSSM | 0.780 | 0.663 | 0.789 | 78.040 |
188D | 0.631 | 0.37 | 0.603 | 63.1373 |
Geary | 0.627 | 0.428 | 0.684 | 62.7451 |
AAC | 0.667 | 0.460 | 0.637 | 66.6667 |
To further verify the stability of monoMonokGap, we tested it on an independent test set [70] and compared it with AC-PSSM, 188D features, Geary correlation features, AAC, and other feature expression methods. The results are shown in Tables 3 and 4. For these experiments, 80% of the data set, selected at random, is used to train the prediction model, and the remaining 20% is used to test it.
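A sketch of this 80/20 hold-out protocol with scikit-learn; the stand-in data, the stratified split, and the random seed are our assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data so the snippet runs; replace with the real feature matrix and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(255, 400))                      # 255 samples, 400 features
y = np.concatenate([np.ones(90, dtype=int),          # 90 presynaptic
                    np.zeros(165, dtype=int)])       # 165 postsynaptic

# 80% for training, 20% held out as the independent test set.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("independent-test accuracy:", clf.score(X_te, y_te))
```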
Method | SN | SP | AUC | ACC (%) |
monoMonokGap (kGap=1) | 0.824 | 0.828 | 0.931 | 82.3592 |
monoMonokGap (kGap=3) | 0.882 | 0.835 | 0.894 | 88.22353 |
monoMonokGap (kGap=5) | 0.980 | 0.989 | 0.985 | 98.0392 |
monoMonokGap (kGap=7) | 0.961 | 0.979 | 0.998 | 96.0784 |
monoMonokGap (kGap=9) | 0.941 | 0.968 | 0.977 | 94.1176 |
AC-PSSM | 0.686 | 0.677 | 0.709 | 68.6275 |
188D | 0.373 | 0.658 | 0.608 | 37.2549 |
Geary | 0.588 | 0.397 | 0.396 | 58.8235 |
AAC | 0.604 | 0.445 | 0.548 | 60.3774 |
Method | SN | SP | AUC | ACC (%) |
monoMonokGap (kGap=1) | 0.804 | 0.742 | 0.872 | 80.3922 |
monoMonokGap (kGap=3) | 0.804 | 0.716 | 0.923 | 80.3922 |
monoMonokGap (kGap=5) | 0.902 | 0.846 | 0.966 | 90.1961 |
monoMonokGap (kGap=7) | 0.882 | 0.810 | 0.977 | 88.2353 |
monoMonokGap (kGap=9) | 0.922 | 0.881 | 0.973 | 92.1569 |
AC-PSSM | 0.725 | 0.598 | 0.697 | 72.549 |
188D | 0.510 | 0.707 | 0.786 | 50.9804 |
Geary | 0.529 | 0.390 | 0.412 | 52.9412 |
AAC | 0.566 | 0.501 | 0.553 | 56.6038 |
It can be seen from Tables 3 and 4 that, compared with other feature expression methods, the monoMonokGap feature expression selected in this article shows little difference between the independent test set results and the ten-fold cross-validation results, which also indicates that the selected feature expression method does not overfit and has good stability.
In this section, the performance of the model constructed in this article is compared with that of RF, Logistic Model Tree (LMT), J48, BayesNet, NaiveBayes, Sequential Minimal Optimization (SMO), and other classifiers. The results of ten-fold cross-validation are shown in Table 5 (the maximum values are indicated in bold). It can be seen from Table 5 that the model constructed in this paper is significantly better than the other classifiers on all indicators, with ACC, AUC, SP, and SN values of 99.6078%, 0.996, 0.998, and 0.996, respectively. This also demonstrates the validity of the model constructed in this article.
Classifier | SN | SP | AUC | ACC (%) |
Neu_LR | 0.996 | 0.998 | 0.996 | 99.6078 |
RF | 0.969 | 0.948 | 0.995 | 96.8627
LMT | 0.918 | 0.884 | 0.971 | 91.7647 |
J48 | 0.863 | 0.804 | 0.877 | 86.2745 |
BayesNet | 0.929 | 0.891 | 0.987 | 92.9412 |
NaiveBayes | 0.863 | 0.779 | 0.945 | 86.2745 |
SMO | 0.973 | 0.965 | 0.969 | 97.2549 |
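The comparison above uses Weka implementations; as a rough Python analogue (the scikit-learn classifiers below are only approximate counterparts, e.g., a CART decision tree stands in for J48 and a linear SVM for SMO, while LMT and BayesNet have no direct equivalents here), such a comparison loop might look like the following:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Approximate scikit-learn analogues of the classifiers compared in Table 5.
CANDIDATES = {
    "Neu_LR (logistic regression)": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "Decision tree (~J48)": DecisionTreeClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "Linear SVM (~SMO)": SVC(kernel="linear"),
}

def compare_classifiers(X, y, cv=10):
    """Mean ten-fold cross-validated accuracy for each candidate classifier."""
    return {name: cross_val_score(clf, X, y, cv=cv).mean()
            for name, clf in CANDIDATES.items()}
```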
To further verify the robustness of the model constructed in this article, we validated it on an independent test set and compared its performance with that of RF, LMT, J48, BayesNet, NaiveBayes, SMO, and other classifiers. The results are shown in Table 6. As before, 80% of the data set, selected at random, is used to train the prediction model, and the remaining 20% is used to test it.
Classifier | SN | SP | AUC | ACC (%) |
Neu_LR | 0.941 | 0.968 | 0.977 | 94.1176 |
RF | 0.922 | 0.881 | 0.972 | 92.1569 |
LMT | 0.784 | 0.756 | 0.886 | 78.4314 |
J48 | 0.843 | 0.788 | 0.824 | 84.3137 |
BayesNet | 0.902 | 0.846 | 0.948 | 90.1961 |
NaiveBayes | 0.804 | 0.691 | 0.920 | 80.3922 |
SMO | 0.922 | 0.932 | 0.927 | 92.1569 |
It can be seen from Table 6 that, compared to the other algorithms, the model constructed in this paper achieves the best prediction performance, with ACC, AUC, SP, and SN values of 94.1176%, 0.977, 0.968, and 0.941, respectively. Moreover, the difference from the 10-fold cross-validation results is very small, which also shows that the model constructed in this article does not suffer from overfitting and has good stability.
This section compares the model constructed in this paper with other existing methods. The comparison results are shown in Table 7, where the results of ID [23] and ANOVA [7] are taken directly from the literature. It can be seen from Table 7 that the model Neu_LR constructed in this paper performs best on all indicators, with SN, SP, and ACC reaching maximum values of 99.6%, 99.8%, and 99.6078%, respectively; its performance is better than that of the other two methods, which also demonstrates the effectiveness of the Neu_LR model constructed in this paper [71].
Method | SN (%) | SP (%) | ACC (%)
ID [23] | 88.46 | 91.30 | 89.80 |
ANOVA [7] | 94.51 | 95.15 | 94.92 |
Neu_LR | 99.60 | 99.80 | 99.6078
A correct understanding of presynaptic and postsynaptic neurotoxins is an essential first step in the discovery of drug targets and in drug design. Protein prediction mainly involves two aspects: feature extraction and the choice of classification algorithm. Therefore, the prediction model Neu_LR was constructed in this article. The monoMonokGap method was used to extract the frequency characteristics of presynaptic and postsynaptic neurotoxin sequences and to carry out feature selection. Then, based on the important features obtained after dimensionality reduction, the logistic regression algorithm was used to construct the prediction model Neu_LR. We used 10-fold cross-validation and independent test set validation to evaluate the Neu_LR model, achieving 99.6078% accuracy in 10-fold cross-validation and 94.1176% accuracy on the independent test set, which shows that our model is feasible and effective.
This work was supported by the National Nature Science Foundation of China (Grant Nos 61863010, 11926205, 11926412, and 61873076), National Key R & D Program of China (No.2020YFB2104400) and Natural Science Foundation of Hainan, China (Grant Nos. 119MS036 and 120RC588), and Hainan Normal University 2020 Graduate Student Innovation Research Project (hsyx2020-41), the Special Science Foundation of Quzhou (2020D003).
All authors declare no conflicts of interest in this paper.