
Neurotoxins are toxic substances that damage nerve tissue; examples include AF64A, 6-hydroxydopamine, and kainic acid. In principle, these toxins act on ion channels at the neuromuscular junction, destroying cholinergic neurons, inhibiting the release of acetylcholine, and blocking neuromuscular conduction, which causes muscle weakness and leaves the muscle unable to contract. In severe cases, they can cause suffocation and death. Neurotoxins can be classified into presynaptic and postsynaptic types according to their mechanism of action [1]. Presynaptic neurotoxins act mainly on the presynaptic membrane [2]; owing to the specificity of their enzymatic activity, they typically block neuromuscular transmission and inhibit the release of neurotransmitters. Postsynaptic neurotoxins target the postsynaptic membrane, where they bind to acetylcholine receptors [3]. For example, β-methylamino-L-alanine (BMAA) can damage motor neurons and has been implicated in Parkinson's syndrome. Cobra neurotoxin, the most important lethal component of cobra venom, is a short-chain, mainly postsynaptic neurotoxin. Because cobra venom neurotoxin is neither addictive nor prone to drug resistance, it holds broad promise for the detoxification treatment of people with drug addiction. The study of presynaptic and postsynaptic neurotoxins will therefore contribute to the development of medicine, for example by providing important clues for drug design [4,5,6].
Neurotoxins are proteins, and although their structure and function can be determined through biochemical experiments, such work is time-consuming and expensive [7,8,9,10]. In the genome era, large numbers of biological sequences are available [11], giving us a variety of methods for predicting protein structure and function [12,13,14,15]. The key to correct prediction is how to extract and analyze sequence features with computational methods, which makes machine learning a natural choice for protein type prediction [16]. In general, using machine learning to predict biological sequences involves the following steps: feature extraction, model construction, and performance evaluation [17,18,19,20,21,22]. In 2009, a diversity-based method for identifying presynaptic and postsynaptic neurotoxins was proposed; the algorithm is based on amino acid and pseudo-amino acid composition [23]. To further improve prediction accuracy, Hua Tang et al. proposed a feature selection technique based on analysis of variance (ANOVA) [7,24]. In this article, we construct a predictive model, Neu_LR, to correctly identify presynaptic and postsynaptic neurotoxins. The monoMonokGap method is used to extract frequency features from presynaptic and postsynaptic neurotoxin sequences and to perform feature selection; the logistic regression algorithm is then applied to the important features obtained after dimensionality reduction to build the prediction model.
As the effectiveness of machine learning has been repeatedly verified in recent years, predicting protein classes with machine learning has become an active research area [25,26]. The key to such prediction lies in data processing and the choice of classification algorithm: the general workflow is to first extract features from the proteins and then feed them to different classifiers. The effective combination of feature extraction algorithms and classifiers has therefore been studied extensively [27,28,29,30,31].
This study proceeds as follows. We first download the presynaptic and postsynaptic neurotoxins from the UniProt database and apply the monoMonokGap feature expression algorithm to extract and select features from the data set, yielding the optimal feature subset. Second, the reduced feature vectors are taken as input, the model is built with the logistic regression algorithm, and ten-fold cross-validation and independent test set validation are carried out. Figure 1 shows the flow chart for building the model [32,33].
High-quality data sets are the basis for building reliable and accurate models [34,35]. The UniProt database provides the scientific community with a single, centralized, authoritative source of protein sequence and functional information [36]. The data set used here is the same as that used by Hua Tang et al.: 91 presynaptic and 165 postsynaptic neurotoxins were downloaded from the UniProt database. Because ambiguous information reduces the quality of the benchmark data set and makes the resulting model unreliable, we removed protein sequences containing unknown residues (such as "X", "Z", "J", "O" and "B"). Because highly similar sequences in a data set can inflate the results, the sequence identity cut-off was set to 80%. After this screening, our data set contains 90 presynaptic and 165 postsynaptic neurotoxins, 255 samples in total, which can be expressed as:
$$S = S_{pr} \cup S_{po}$$
where the subset $S_{pr}$ is the collection of 90 presynaptic neurotoxins and $S_{po}$ is the collection of 165 postsynaptic neurotoxins.
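The cleaning step above is simple enough to show in code. Below is a minimal Python sketch of the ambiguous-residue filter, assuming the sequences are plain strings; the names are illustrative, not part of the original pipeline, and the 80% sequence-identity filtering is omitted.

```python
# A minimal sketch of the cleaning step described above: sequences
# containing ambiguous residues ("X", "Z", "J", "O", "B") are discarded.
# Names are illustrative; the 80% sequence-identity filtering is omitted.
AMBIGUOUS = set("XZJOB")

def remove_ambiguous(sequences):
    """Keep only sequences built from the 20 standard amino acids."""
    return [s for s in sequences if not (set(s) & AMBIGUOUS)]

# The second sequence is dropped because it contains "X".
print(remove_ambiguous(["MKTLLV", "MKXTLL"]))  # ['MKTLLV']
```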
Each protein sequence can be expressed by the following formula:
$$R = r_1 r_2 r_3 \cdots r_L$$
where $R$ stands for the protein sequence, $r_i$ for the residue at position $i$, and $L$ for the length of the protein sequence. Since some machine learning methods cannot learn directly from $R$, the protein sequences have to be converted to fixed-length vectors [37].
As the first step in building a biological sequence analysis model, feature extraction is essential to the correct prediction of protein sequences. In general, a feature extraction method converts each input neurotoxin sequence into a fixed-length numeric vector; MRMD2.0 is then used to reduce the dimensionality of the resulting vectors as needed. Finally, the reduced vectors serve as input to the classifier model [38].
The monoMonokGap is a feature extraction method that can generate the best features from a large pool of candidate features: it considers pairs of residues separated by up to kGap positions in a sequence, and the frequencies of these gapped sub-sequences are treated as prediction features. It can be applied to DNA, RNA, and protein sequences [39]. The selection range of kGap is 1 to 5 for DNA and RNA sequences and 1 to 10 for protein sequences. When kGap is small, the feature set is small and the feature frequencies retain partial or short-range sequence-order information; when kGap is moderately large, more features are generated and longer-range sequence information is retained [40]. Specifically, when kGap = 1 the sequence is encoded as frequencies of the pattern X_X, generating 4×1×4 features for nucleotide sequences or 20×1×20 features for proteins; when kGap = 2, 4×2×4 or 20×2×20 features are generated, and so on [41]. That is, for kGap = n, DNA and RNA sequences produce 4×4×n features and protein sequences produce 20×20×n features. The generated feature patterns are as follows:
when kGap = 1, the feature pattern is X_X;
when kGap = 2, the feature patterns are X_X and X__X; and so on,
where X is defined as:

$$X \in \begin{cases} \{A, C, G, T\} & \text{for DNA sequences} \\ \{A, C, G, U\} & \text{for RNA sequences} \\ \{A, C, D, E, F, G, H, I, K, L, M, N, P, Q, R, S, T, V, W, Y\} & \text{for protein sequences} \end{cases}$$
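To make the encoding concrete, here is a minimal Python sketch of monoMonokGap counting for protein sequences, assuming that a gap value g corresponds to g skipped positions between the two residues (the X_X, X__X patterns above); the names are illustrative and this is not the authors' implementation.

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mono_mono_kgap(seq, k_gap):
    """Frequencies of residue pairs with 1..k_gap skipped positions
    between them (X_X, X__X, ...): 20*20*k_gap features per sequence."""
    features = []
    for gap in range(1, k_gap + 1):
        offset = gap + 1  # first residue, `gap` skipped positions, second residue
        counts = {pair: 0 for pair in product(AMINO_ACIDS, repeat=2)}
        n_pairs = max(len(seq) - offset, 1)
        for i in range(len(seq) - offset):
            counts[(seq[i], seq[i + offset])] += 1
        features.extend(counts[pair] / n_pairs for pair in sorted(counts))
    return features

vec = mono_mono_kgap("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", k_gap=2)
print(len(vec))  # 800 = 20 * 20 * 2
```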
For feature selection, to reduce the negative impact of high dimensionality while retaining informative features, the AdaBoost classification model can be used to compute the mean impurity reduction of each feature. AdaBoost is a popular boosting algorithm in data mining; its core idea is to train multiple weak classifiers on the same training set and combine them into a strong classifier [42]. Since Freund and Schapire proposed the AdaBoost algorithm [43,44,45], improvements have mainly involved two aspects: 1) adjusting the weights of the weak classifiers in new ways and 2) improving the training procedure to reduce the classifier's error rate or save training time.
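As a hedged illustration of this selection step, the sketch below uses scikit-learn's AdaBoostClassifier, whose feature_importances_ attribute exposes the mean impurity reduction described above; the data is synthetic and the cut-off of 50 features is arbitrary.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(255, 400))   # synthetic stand-in for sequence features
y = rng.integers(0, 2, size=255)  # 0 = presynaptic, 1 = postsynaptic (assumed coding)

model = AdaBoostClassifier(n_estimators=100).fit(X, y)
# Mean impurity reduction, averaged over the boosted weak learners:
top = np.argsort(model.feature_importances_)[::-1][:50]
X_selected = X[:, top]            # keep the 50 most informative features
print(X_selected.shape)           # (255, 50)
```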
Each residue in a protein sequence has many physical and chemical properties, so a protein sequence can be regarded as a time series of those properties [46]. PSI-BLAST is run against the data set, with appropriate parameters, to generate the profile of each sequence.
AC-PSSM [47] uses PSI-BLAST [48] as its search tool. PSI-BLAST provides a means of detecting distant relationships between proteins: it is a profile-based search method that compares a protein sequence against other protein sequences with greater sensitivity. The database used is the nr library.
AC-PSSM can convert PSSMs of different lengths into vectors of fixed length. AC measures the correlation between two residues with the same property, which can be expressed as:
$$AC(i, lag) = \sum_{j=1}^{L-lag} \frac{(S_{i,j} - \bar{S}_i)(S_{i,j+lag} - \bar{S}_i)}{L - lag}, \qquad \bar{S}_i = \frac{1}{L}\sum_{j=1}^{L} S_{i,j} \tag{1}$$
where $i$ denotes one of the residues, $L$ is the length of the protein sequence, $S_{i,j}$ is the PSSM score of amino acid $i$ at the $j$th position, and $\bar{S}_i$ is the average score of amino acid $i$ over the entire protein sequence. In this way, the number of AC features is $20 \times LAG$, where $LAG$ is the maximum lag and $lag$ takes all integer values from 1 to $LAG$. In this article, we set $LAG$ to its default value of 2, that is, the maximum lag is 2. Figure 2 shows the flowchart for generating AC-PSSM when the maximum lag is 2. With $LAG = 2$, AC-PSSM generates 40 columns of features. The first 20 columns represent the lag-1 features of the PSSM matrix, expressed as
(A,A,1),(R,R,1),(N,N,1),⋯,(V,V,1) |
The last 20 columns represent the lag-2 features of the PSSM matrix, expressed as
(A,A,2),(R,R,2),(N,N,2),⋯,(V,V,2) |
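Eq. (1) translates directly into a few lines of NumPy. The sketch below assumes the PSSM is an L × 20 score matrix already produced by PSI-BLAST; with the default LAG = 2 it returns the 40 features described above.

```python
import numpy as np

def ac_pssm(pssm, max_lag=2):
    """Autocovariance features of Eq. (1): 20 * max_lag values."""
    L, n_res = pssm.shape              # pssm is L x 20
    means = pssm.mean(axis=0)          # S-bar_i, average score per residue column
    feats = []
    for lag in range(1, max_lag + 1):  # lag-1 block first, then lag-2, ...
        for i in range(n_res):
            d = pssm[:, i] - means[i]
            # AC(i, lag) = sum_{j=1}^{L-lag} d_j * d_{j+lag} / (L - lag)
            feats.append(np.dot(d[: L - lag], d[lag:]) / (L - lag))
    return np.array(feats)

print(ac_pssm(np.random.rand(120, 20)).shape)  # (40,)
```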
Feature selection, also known as variable selection or attribute selection, is the process of selecting the features that contribute most to the predictor variable or output of interest. After extracting features from the sequences, the MRMD2.0 algorithm is used for feature selection.
MRMD2.0 is based on the PageRank algorithm. It is a Python-based tool that not only reduces the dimensionality of data sets but also draws performance curves over feature dimensions; by picking a dimension from the performance curve, a little accuracy can be traded for fewer features [49,50].
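MRMD2.0's exact procedure is not reproduced here, but the following sketch illustrates the underlying PageRank idea under simplifying assumptions: features are graph nodes, edges are weighted by absolute correlation, and features are ranked by the stationary distribution of a damped random walk.

```python
import numpy as np

def pagerank_feature_rank(X, damping=0.85, iters=100):
    """Rank features by PageRank over a feature-correlation graph."""
    corr = np.abs(np.corrcoef(X, rowvar=False))       # node = feature
    np.fill_diagonal(corr, 0.0)
    trans = corr / (corr.sum(axis=0, keepdims=True) + 1e-12)  # column-stochastic
    n = X.shape[1]
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * trans @ r   # damped random walk
    return np.argsort(r)[::-1]                        # most central first

X = np.random.rand(255, 60)
keep = pagerank_feature_rank(X)[:20]                  # retain 20 top-ranked features
```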
The basic idea of classification is to learn the parameters of a classifier from training data, with the goal of minimizing the loss of accuracy on the training set [51,52]. Weka (3.8.5) can be used to build data mining and prediction models and offers many different classifiers, such as random forests and Bayesian classifiers [27,49,53].
The random forest algorithm is an ensemble method composed of multiple decision tree classifiers, each subclassifier being a CART classification-and-regression tree, so random forests can be used for both classification and regression. The algorithm is highly resistant to overfitting, since averaging over decision trees reduces that risk [54,55]. Its advantages are simple implementation, high accuracy, fast training, strong resistance to overfitting, and suitability as a benchmark model. Its disadvantages are that it can still overfit on sample sets with substantial noise, and that with many decision trees the training time and memory requirements become relatively large.
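A brief, hedged sketch of such a random-forest baseline with scikit-learn follows; the data is synthetic, standing in for the selected monoMonokGap features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(255, 50))    # synthetic stand-in for selected features
y = rng.integers(0, 2, size=255)

# Averaging over many randomized trees lowers the variance of any single
# decision tree, which is the anti-overfitting effect noted above.
rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
print(rf.predict_proba(X[:3]))    # class probabilities averaged over trees
```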
Logistic regression is a machine learning method for binary classification problems and is a generalized linear model [56]. It establishes a hyperplane to classify samples, which can be described by the following formula:
$$h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}} \tag{2}$$
where $x = [x_1, x_2, \cdots, x_n]$ is an $n$-dimensional sample vector and $g$ is the logistic function, whose general form is $g(z) = \frac{1}{1+e^{-z}}$. The advantages of logistic regression are its low computational cost and ease of understanding and implementation; the disadvantages are that it tends to underfit and its classification accuracy may not be high.
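Eq. (2) in code: a minimal NumPy sketch of the logistic hypothesis and the resulting decision rule, with dummy weights in place of parameters learned from training data (the class coding is an assumption for illustration).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    return sigmoid(theta @ x)            # h_theta(x) = g(theta^T x), Eq. (2)

theta = np.array([0.5, -1.2, 0.3])       # dummy weights; normally learned
x = np.array([1.0, 0.2, -0.4])
label = int(h(theta, x) >= 0.5)          # assumed coding: 1 = postsynaptic
print(h(theta, x), label)
```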
In this article, k-fold cross-validation was chosen to test predictions. Specificity (SP), sensitivity (SN), and accuracy (ACC) [57,58,59,60,61] were used to evaluate our proposed method [62]; they can be expressed as:
$$SN = 1 - \frac{N_{pr}^{po}}{N_{pr}} \tag{3}$$

$$SP = 1 - \frac{N_{po}^{pr}}{N_{po}} \tag{4}$$

$$ACC = 1 - \frac{N_{pr}^{po} + N_{po}^{pr}}{N_{pr} + N_{po}} \tag{5}$$
where $N_{pr}$ and $N_{po}$ are the numbers of presynaptic and postsynaptic neurotoxins, respectively; $N_{pr}^{po}$ is the number of presynaptic neurotoxins incorrectly predicted as postsynaptic; and $N_{po}^{pr}$ is the number of postsynaptic neurotoxins incorrectly predicted as presynaptic.
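These three metrics are straightforward to compute from the four counts; the sketch below mirrors Eqs. (3)-(5), with illustrative counts.

```python
def sn(n_pr, n_pr_to_po):
    return 1 - n_pr_to_po / n_pr                          # Eq. (3), sensitivity

def sp(n_po, n_po_to_pr):
    return 1 - n_po_to_pr / n_po                          # Eq. (4), specificity

def acc(n_pr, n_po, n_pr_to_po, n_po_to_pr):
    return 1 - (n_pr_to_po + n_po_to_pr) / (n_pr + n_po)  # Eq. (5), accuracy

# Illustrative counts: 90 presynaptic with 3 missed, 165 postsynaptic with 2.
print(sn(90, 3), sp(165, 2), acc(90, 165, 3, 2))
```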
Previous studies have shown that feature extraction is very important for predictors, and optimized features can improve prediction accuracy [63,64,65,66,67]. High-dimensional data in particular may contain noise and redundant information that negatively affect prediction.
In this section, the prediction results of monoMonokGap, AC-PSSM, 188D features, Geary correlation features, and amino acid composition (AAC) [68,69] features under logistic regression and random forest are compared. The results are shown in Tables 1 and 2 (maximum values in bold). As the tables show, the monoMonokGap features used in this model perform best on all indicators. Under the random forest algorithm, monoMonokGap performs best at kGap = 7; under the logistic regression algorithm, it achieves the best result at kGap = 9. Comparing the two, kGap = 9 with logistic regression performed best overall, with ACC, AUC, SP, and SN values of 99.6078%, 0.996, 0.998, and 0.996, respectively. This demonstrates the effectiveness of monoMonokGap (kGap = 9), so we chose it as the feature expression method for our model.
Table 1. Ten-fold cross-validation results of different feature expression methods under random forest.

Method | SN | SP | AUC | ACC (%)
monoMonokGap (kGap=1) | 0.910 | 0.870 | 0.966 | 90.9804 |
monoMonokGap (kGap=3) | 0.941 | 0.912 | 0.985 | 94.1176 |
monoMonokGap (kGap=5) | 0.945 | 0.920 | 0.992 | 94.5098 |
monoMonokGap (kGap=7) | 0.957 | 0.936 | 0.991 | 95.6863 |
monoMonokGap (kGap=9) | 0.949 | 0.917 | 0.992 | 94.902 |
AC-PSSM | 0.839 | 0.776 | 0.868 | 83.9216 |
188D | 0.710 | 0.609 | 0.794 | 70.9804 |
Geary | 0.533 | 0.417 | 0.481 | 53.3333 |
AAC | 0.686 | 0.602 | 0.736 | 68.6275 |
Table 2. Ten-fold cross-validation results of different feature expression methods under logistic regression.

Method | SN | SP | AUC | ACC (%)
monoMonokGap (kGap=1) | 0.929 | 0.901 | 0.971 | 92.9412 |
monoMonokGap (kGap=3) | 0.925 | 0.889 | 0.988 | 92.549 |
monoMonokGap (kGap=5) | 0.984 | 0.986 | 0.994 | 98.4314 |
monoMonokGap (kGap=7) | 0.973 | 0.980 | 0.995 | 97.2549 |
monoMonokGap (kGap=9) | 0.996 | 0.998 | 0.996 | 99.6078 |
AC-PSSM | 0.780 | 0.663 | 0.789 | 78.0392
188D | 0.631 | 0.37 | 0.603 | 63.1373 |
Geary | 0.627 | 0.428 | 0.684 | 62.7451 |
AAC | 0.667 | 0.460 | 0.637 | 66.6667 |
To further verify the stability of monoMonokGap, we tested it on an independent test set [70] and compared it with the AC-PSSM, 188D, Geary correlation, AAC and other feature expression methods. The results are shown in Tables 3 and 4: 80% of the data set, selected at random, was used to train the prediction model, and the remaining 20% was used to test it.
Table 3. Independent test set results of different feature expression methods under logistic regression.

Method | SN | SP | AUC | ACC (%)
monoMonokGap (kGap=1) | 0.824 | 0.828 | 0.931 | 82.3592 |
monoMonokGap (kGap=3) | 0.882 | 0.835 | 0.894 | 88.2353
monoMonokGap (kGap=5) | 0.980 | 0.989 | 0.985 | 98.0392 |
monoMonokGap (kGap=7) | 0.961 | 0.979 | 0.998 | 96.0784 |
monoMonokGap (kGap=9) | 0.941 | 0.968 | 0.977 | 94.1176 |
AC-PSSM | 0.686 | 0.677 | 0.709 | 68.6275 |
188D | 0.373 | 0.658 | 0.608 | 37.2549 |
Geary | 0.588 | 0.397 | 0.396 | 58.8235 |
AAC | 0.604 | 0.445 | 0.548 | 60.3774 |
Table 4. Independent test set results of different feature expression methods under random forest.

Method | SN | SP | AUC | ACC (%)
monoMonokGap (kGap=1) | 0.804 | 0.742 | 0.872 | 80.3922 |
monoMonokGap (kGap=3) | 0.804 | 0.716 | 0.923 | 80.3922 |
monoMonokGap (kGap=5) | 0.902 | 0.846 | 0.966 | 90.1961 |
monoMonokGap (kGap=7) | 0.882 | 0.810 | 0.977 | 88.2353 |
monoMonokGap (kGap=9) | 0.922 | 0.881 | 0.973 | 92.1569 |
AC-PSSM | 0.725 | 0.598 | 0.697 | 72.549 |
188D | 0.510 | 0.707 | 0.786 | 50.9804 |
Geary | 0.529 | 0.390 | 0.412 | 52.9412 |
AAC | 0.566 | 0.501 | 0.553 | 56.6038 |
As Tables 3 and 4 show, compared with the other feature expression methods, the monoMonokGap features selected in this article yield independent test set results that differ little from the ten-fold cross-validation results, which indicates that the chosen feature expression method does not overfit and has good stability.
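For completeness, here is a hedged sketch of the two evaluation protocols used throughout (ten-fold cross-validation and the 80/20 split), using scikit-learn's logistic regression on synthetic data in place of the Weka setup actually used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(255, 50))    # synthetic stand-in for the feature matrix
y = rng.integers(0, 2, size=255)

clf = LogisticRegression(max_iter=1000)
cv_acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()

# 80/20 split: train on 80% of the samples, test on the held-out 20%.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=2)
ind_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
print(cv_acc, ind_acc)
```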
In this section, the performance of the model constructed in this article is compared with RF, Logistic Model Tree (LMT), J48, BayesNet, NaiveBayes, Sequential Minimal Optimization (SMO), and other classifiers. The ten-fold cross-validation results are shown in Table 5 (maximum values in bold). The model constructed in this paper is clearly better than the other classifiers on every indicator, with ACC, AUC, SP, and SN values of 99.6078%, 0.996, 0.998 and 0.996, respectively, which demonstrates the validity of the model.
Table 5. Ten-fold cross-validation results of Neu_LR compared with other classifiers.

Classifier | SN | SP | AUC | ACC (%)
Neu_LR | 0.996 | 0.998 | 0.996 | 99.6078 |
RF | 0.969 | 0.948 | 0.995 | 96.8627
LMT | 0.918 | 0.884 | 0.971 | 91.7647 |
J48 | 0.863 | 0.804 | 0.877 | 86.2745 |
BayesNet | 0.929 | 0.891 | 0.987 | 92.9412 |
NaiveBayes | 0.863 | 0.779 | 0.945 | 86.2745 |
SMO | 0.973 | 0.965 | 0.969 | 97.2549 |
To further verify the robustness of the model, we validated it on the independent test set and compared its performance with RF, LMT, J48, BayesNet, NaiveBayes, SMO, and other classifiers. The results are shown in Table 6: 80% of the data set, selected at random, was used to train the prediction model, and the remaining 20% was used to test it.
Table 6. Independent test set results of Neu_LR compared with other classifiers.

Classifier | SN | SP | AUC | ACC (%)
Neu_LR | 0.941 | 0.968 | 0.977 | 94.1176 |
RF | 0.922 | 0.881 | 0.972 | 92.1569 |
LMT | 0.784 | 0.756 | 0.886 | 78.4314 |
J48 | 0.843 | 0.788 | 0.824 | 84.3137 |
BayesNet | 0.902 | 0.846 | 0.948 | 90.1961 |
NaiveBayes | 0.804 | 0.691 | 0.920 | 80.3922 |
SMO | 0.922 | 0.932 | 0.927 | 92.1569 |
As Table 6 shows, compared with the other algorithms, the model constructed in this paper achieves the best prediction results, with ACC, AUC, SP, and SN values of 94.1176%, 0.977, 0.968, and 0.941, respectively. Moreover, the difference from the 10-fold cross-validation results is very small, which indicates that the model does not overfit and has good stability.
This section compares the model constructed in this paper with existing methods. The comparison results are shown in Table 7, where the results of ID [23] and ANOVA [7] are taken directly from the literature. As Table 7 shows, the model Neu_LR performs best on all indicators, with SN, SP and ACC reaching 99.60%, 99.80% and 99.6078% respectively, outperforming the other two methods and demonstrating the effectiveness of Neu_LR [71].
Table 7. Comparison of Neu_LR with existing methods.

Method | SN (%) | SP (%) | ACC (%)
ID [23] | 88.46 | 91.30 | 89.80
ANOVA [7] | 94.51 | 95.15 | 94.92
Neu_LR | 99.60 | 99.80 | 99.6078
A correct understanding of presynaptic and postsynaptic neurotoxins is an essential first step in drug target discovery and drug design, and protein prediction mainly involves two aspects: feature extraction and the choice of classification algorithm. We therefore constructed the prediction model Neu_LR. The monoMonokGap method was used to extract frequency features from presynaptic and postsynaptic neurotoxin sequences and to perform feature selection; the logistic regression algorithm was then applied to the important features obtained after dimensionality reduction. We evaluated Neu_LR with 10-fold cross-validation and independent test set validation, achieving accuracies of 99.6078% and 94.1176% respectively, which shows that our model is feasible and effective.
This work was supported by the National Natural Science Foundation of China (Grant Nos. 61863010, 11926205, 11926412, and 61873076), the National Key R&D Program of China (No. 2020YFB2104400), the Natural Science Foundation of Hainan, China (Grant Nos. 119MS036 and 120RC588), the Hainan Normal University 2020 Graduate Student Innovation Research Project (hsyx2020-41), and the Special Science Foundation of Quzhou (2020D003).
All authors declare no conflicts of interest in this paper.