1. Introduction
The proteins contained in biological membranes are called membrane proteins; they play a leading role in maintaining many life activities, including but not limited to cell proliferation and differentiation, energy transformation, signal transduction and material transport. Since a membrane protein's functions are significantly associated with its type, it is important to identify the types of membrane proteins [1]. Membrane proteins can be grouped into eight types [2]: single-span type 1, single-span type 2, single-span type 3, single-span type 4, multi-span, lipid-anchor, glycosylphosphatidylinositol (GPI)-anchor and peripheral.
There exist many different computational methods that can be used for identifying the types of proteins. Chou and Elrod [3] used the covariant discriminant algorithm (CDA) based on amino acid composition (AAC) to predict membrane protein types. To address the challenge posed by the large number of possible patterns in protein sequences, Chou [4] introduced the pseudo-amino acid composition (PAAC), which combines a set of discrete sequence correlation factors with the 20 components of the traditional amino acid composition. Wang et al. [5] utilized the pseudo amino acid composition to incorporate sequence-order effects, introduced spectral analysis for representing the statistical sample of a protein and applied a weighted support vector machine (SVM) algorithm. Liu et al. [6] introduced low-frequency Fourier spectrum analysis based on the concept of PAAC, which effectively incorporates sequence patterns into discrete components and enables existing prediction algorithms to be applied directly to protein samples. Chou and Shen [7] developed a two-layer predictor that first classifies a protein as membrane or non-membrane; if a protein is classified as membrane, a second-layer prediction engine determines its specific type from eight categories. The predictor stands out for its incorporation of evolutionary information through pseudo position-specific score matrix (Pse-PSSM) vectors and its ensemble classifier consisting of multiple optimized evidence-theoretic K-nearest neighbor (OET-KNN) classifiers. Rezaei et al. [8] classified membrane proteins by applying wavelet analysis to their sequences and extracting informative features; these features were normalized and used as input for a cascaded model, which aimed to mitigate the bias caused by differences in membrane protein class sizes in the dataset. Wang et al. [9] utilized the dipeptide composition (DC) method to represent proteins as high-dimensional feature vectors, introduced the neighborhood preserving embedding (NPE) algorithm for linear dimensionality reduction of the high-dimensional DC space and then employed the reduced low-dimensional features with the K-nearest neighbor (K-NN) classifier to classify membrane protein types. Hayat and Khan [10] integrated composite protein sequence features (CPSR) with the PAAC to classify membrane proteins. They further proposed using split amino acid composition (SAAC) and ensemble classification [11], and still further fused the position specific scoring matrix (PSSM) and SAAC [12] to classify membrane proteins. Chen and Li [13] introduced a novel computational classifier for the prediction of membrane protein types from protein sequences, constructed from a collection of one-versus-one SVMs and incorporating various sequence attributes. Han et al. [14] integrated amino acid classifications and physicochemical properties into PAAC and used a two-stage multiclass SVM to classify membrane proteins. Wan et al. [15] retrieved the associated gene ontology (GO) information of a query membrane protein by searching a compact GO-term database with its homologous accession number, and subsequently employed a multi-label elastic net (EN) classifier to classify the membrane protein based on this information. Lu et al. [16] used a dynamic deep network architecture based on lifelong learning for the classification of membrane proteins. Wang et al. [17] introduced a new support bio-sequence machine, which used SVM for protein classification.
In summary, most of the above models use different computational methods to represent membrane proteins and then apply classification algorithms to identify membrane protein types. The feature input formats of these models vary, as shown in Table 1.
Table 1.
Varied types of feature input formats of different methods.
However, in real practice more than two proteins can be linked by non-covalent interactions [18,19], and the representation of proteins is multi-modal. Traditional computational methods for identifying membrane protein types tend to ignore these two issues, which leads to information loss because the high-order correlation among membrane proteins and the multi-modal representations of membrane proteins are not exploited.
To tackle these problems, in this paper we use a deep residual hypergraph neural network (DRHGNN) [20] to further learn the representations of membrane proteins and eventually achieve accurate identification of membrane protein types.
First, each membrane protein is represented by the extracted features. Here, five feature extraction methods are employed based on the PSSM of the membrane protein sequence [2]: average blocks (AvBlock), discrete cosine transform (DCT), discrete wavelet transform (DWT), histogram of oriented gradients (HOG) and PsePSSM, and five types of features are extracted accordingly. Second, each feature type generates a hypergraph G represented by an incidence matrix H that models complex high-order correlations; the five types of features and the corresponding incidence matrices H are concatenated, respectively, which handles the multi-modal representations of membrane proteins. Lastly, the concatenated features and the fused incidence matrix are input into a DRHGNN to classify the various types of membrane proteins. To assess the performance of the DRHGNN, we perform tests on four distinct membrane protein datasets. In the membrane protein classification task, the model achieves better performance.
2. Materials and methods
In order to extract features of membrane proteins, we employ AvBlock, DCT, DWT, HOG and PsePSSM [2] based on the PSSM of the membrane protein sequence. Each type of PSSM-based feature is used to generate a hypergraph that can be represented by an incidence matrix H; the five types of features and their corresponding H are then concatenated, respectively, and both are fed into a DRHGNN [20,21,22] to identify the types of membrane proteins. Figure 1 depicts the schematic diagram.
Figure 1.
The schematic diagram of our proposed method.
We judge the performance of DRHGNN on the classification of membrane proteins based on four datasets, namely, Dataset 1, Dataset 2, Dataset 3 and Dataset 4.
Dataset 1 is directly sourced from Chou's work [7], where protein sequences are sourced from the Swiss-Prot [23] database. Chou and Shen [7] employed a percentage distribution method to randomly assign the protein sequences into both the training set and the testing set. This was done to ensure a balanced number of sequences between the two sets. Dataset 1 consists of 7582 membrane proteins from eight types and the same training/testing split as [7], where 3249 membrane proteins are employed for training, with the remaining 4333 employed for testing.
Dataset 2 was created by removing redundant and highly similar sequences from Dataset 1. This resulted in a curated dataset with reduced homology, specifically ensuring that no pair of proteins shared a sequence identity greater than 40%. The training set of Dataset 2 was obtained by removing redundant sequences from Dataset 1's training set. Similarly, the testing set of Dataset 2 was prepared by eliminating redundant sequences and those with high sequence identity to the training set. Dataset 2 consists of 4594 membrane proteins from eight types and the same training/testing split as [13], where 2288 membrane proteins are employed for training, with the remaining 2306 membrane proteins employed for testing.
To update and expand the datasets, Chen and Li [13] created Dataset 3 through the following steps. Initially, membrane protein sequences were obtained from the Swiss-Prot [23] database using the "protein subcellular localization" annotation. Stringent exclusion criteria were applied to ensure dataset quality: 1) exclusion of fragmented proteins or those shorter than 50 amino acid residues; 2) removal of proteins with non-experimental qualifiers or multiple topologies in their annotations; 3) elimination of homologous sequences with a sequence identity greater than 40% using clustering database at high identity with tolerance (CD-HIT) [24]. Subsequently, the sequences were categorized into their respective membrane protein types based on topology annotations. To generate the training and testing sets, a random assignment was performed employing the above-mentioned percentage distribution method. Consequently, Dataset 3 was created, providing an updated and expanded dataset of membrane protein sequences characterized by enhanced quality and classification. Dataset 3 consists of 6677 membrane proteins from eight types and the same training/testing split as [13], where 3073 membrane proteins are employed for training and 3604 for testing.
Dataset 4 is directly sourced from Chou's work [3], where protein sequences are sourced from the Swiss-Prot [23] database. The training and testing sets were obtained after protein sequences were screened with three procedures. Dataset 4 consists of 4684 membrane proteins from five types and the same training/testing split as [3], where 2059 membrane proteins are used for training and 2625 membrane proteins are employed for testing. Table 2 outlines the details of the datasets.
Table 2.
The scale of training and testing samples in four different membrane proteins' datasets.
We use the same membrane protein features as [2], which are extracted with five methods based on the PSSM of membrane proteins.
2.2. PSSM
The PSSM is a widely used tool in the field of bioinformatics for capturing evolutionary information encoded within membrane protein sequences. It is generated through multiple sequence alignment and database searching methods, such as the position-specific iterated BLAST (PSI-BLAST) program [25], to identify conserved residues and their positional probabilities.
The evolutionary information obtained from the PSSM is preserved within a matrix of size $R \times 20$ ($R$ rows and 20 columns), presented as follows:

$$M_{\mathrm{PSSM}} = \begin{bmatrix} E_{1,1} & E_{1,2} & \cdots & E_{1,20} \\ E_{2,1} & E_{2,2} & \cdots & E_{2,20} \\ \vdots & \vdots & \ddots & \vdots \\ E_{R,1} & E_{R,2} & \cdots & E_{R,20} \end{bmatrix} \quad (2.1)$$

The column indices 1–20 each denote one of the 20 different amino acids, and $R$ denotes the length of the membrane protein sequence. The element $E_{i,j}$ is calculated as follows:

$$E_{i,j} = \sum_{k=1}^{20} w_{i,k} \times D(k, j), \quad i = 1, \ldots, R, \; j = 1, \ldots, 20, \quad (2.2)$$

where $w_{i,k}$ represents the frequency of the k-th amino acid type at position i, and $D(k, j)$ denotes the value derived from Dayhoff's mutation matrix (substitution matrix) for the k-th and j-th amino acid types. The utilization of these variables in the equation aims to incorporate amino acid frequency information and substitution probabilities.
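The following is a minimal NumPy sketch of the arithmetic in Eq. (2.2), assuming a precomputed frequency profile and a generic 20 × 20 substitution matrix; in practice the PSSM is produced by PSI-BLAST, and the function and variable names here are purely illustrative.

```python
import numpy as np

def pssm_from_frequencies(w: np.ndarray, D: np.ndarray) -> np.ndarray:
    """E[i, j] = sum_k w[i, k] * D(k, j), Eq. (2.2): combine observed amino
    acid frequencies (R x 20) with a 20 x 20 substitution matrix."""
    return w @ D  # (R, 20) @ (20, 20) -> (R, 20) PSSM

# Toy example: a random 60-residue frequency profile and a symmetric matrix.
rng = np.random.default_rng(0)
w = rng.dirichlet(np.ones(20), size=60)        # each row sums to 1
D = rng.normal(size=(20, 20)); D = (D + D.T) / 2
print(pssm_from_frequencies(w, D).shape)       # (60, 20)
```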
2.3. AvBlock
AvBlock (average blocks) is a widely adopted approach for constructing matrix descriptors to represent protein sequences [26]. Here, the PSSM is partitioned into 20 blocks of consecutive rows, and each block is averaged column-wise, so that each block is transformed into a feature vector of dimensionality 20, yielding a 400-dimensional AvBlock descriptor for the PSSM.
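A minimal sketch of this block-averaging, assuming 20 row blocks and column-wise means as described above; the toy input and function name are illustrative, not the authors' implementation.

```python
import numpy as np

def avblock_features(pssm: np.ndarray, n_blocks: int = 20) -> np.ndarray:
    """Partition the (R, 20) PSSM into consecutive row blocks and average each
    block column-wise; 20 blocks x 20 columns -> a 400-dimensional descriptor."""
    blocks = np.array_split(pssm, n_blocks, axis=0)   # tolerates R not divisible by 20
    return np.concatenate([block.mean(axis=0) for block in blocks])

print(avblock_features(np.random.rand(137, 20)).shape)   # (400,)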
2.4. DCT
The DCT [27] is a mathematical transform widely used in signal and image processing. Here, we employ a two-dimensional DCT (2D-DCT) for compressing the PSSM of proteins. The mathematical definition for the 2D-DCT is
$$F(u,v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y) \cos\left[\frac{(2x+1)u\pi}{2M}\right] \cos\left[\frac{(2y+1)v\pi}{2N}\right] \quad (2.3)$$

$$\alpha(u) = \begin{cases} \sqrt{1/M}, & u = 0 \\ \sqrt{2/M}, & 1 \le u \le M-1 \end{cases} \quad (2.4)$$

$$\alpha(v) = \begin{cases} \sqrt{1/N}, & v = 0 \\ \sqrt{2/N}, & 1 \le v \le N-1 \end{cases} \quad (2.5)$$

where $f(x,y)$ is the $(x,y)$ element of the $M \times N$ PSSM, $0 \le u \le M-1$ and $0 \le v \le N-1$.
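A short sketch of this compression step using SciPy's orthonormal 2-D DCT; the number of retained low-frequency coefficients (`keep`) is an assumption for illustration, since the exact truncation used by the authors is not stated here.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(pssm: np.ndarray, keep: int = 20) -> np.ndarray:
    """2-D DCT of the PSSM (Eqs. 2.3-2.5, orthonormal form); keep the top-left
    `keep` x 20 block of low-frequency coefficients as a compressed descriptor."""
    coeffs = dctn(pssm, type=2, norm="ortho")
    return coeffs[:keep, :].ravel()

print(dct_features(np.random.rand(150, 20)).shape)   # (400,)
```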
2.5. DWT
The DWT has been utilized to extract informative features from protein amino acid sequences, as initially introduced by Nanni et al. [28]. Here, we applied a 4-level DWT to preprocess the PSSM matrix. At each level, we compute both the approximate and detailed coefficients for each column. We extract essential statistical features (maximum, minimum, mean and standard deviation) from both the approximate and detailed coefficients. Additionally, we capture the first five discrete cosine coefficients exclusively from the approximate coefficients. Therefore, for each of the 20 column dimensions, a total of 13 features (4 + 4 statistics plus 5 DCT coefficients) are obtained at each level.
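A minimal per-column sketch of this 4-level decomposition; the choice of wavelet (`db1`) and the toy sequence length are assumptions made for illustration only, since the specific wavelet family is not stated here.

```python
import numpy as np
import pywt
from scipy.fft import dct

def dwt_features(pssm: np.ndarray, wavelet: str = "db1", levels: int = 4) -> np.ndarray:
    """Per-column 4-level DWT of the PSSM; at every level keep max/min/mean/std
    of the approximation and detail coefficients plus the first five DCT
    coefficients of the approximation (4 + 4 + 5 = 13 values per column per level)."""
    feats = []
    for col in pssm.T:                     # iterate over the 20 PSSM columns
        approx = col
        for _ in range(levels):
            approx, detail = pywt.dwt(approx, wavelet)
            feats += [approx.max(), approx.min(), approx.mean(), approx.std(),
                      detail.max(), detail.min(), detail.mean(), detail.std()]
            feats += list(dct(approx, norm="ortho")[:5])
    return np.asarray(feats)

print(dwt_features(np.random.rand(160, 20)).shape)   # 20 columns * 4 levels * 13 = (1040,)
```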
2.6. HOG
The HOG is a feature descriptor used in computer vision and image processing for object detection and recognition. Here, we propose a method to reduce redundancy in protein data using the HOG algorithm. We consider the PSSM as an image-like matrix representation. First, we compute the horizontal and vertical gradients of the PSSM to obtain the gradient magnitude and direction matrices. These matrices are then partitioned into 25 sub-matrices that incorporate both the gradient magnitude and direction information. Subsequently, we generate 10 distinct histogram channels for each sub-matrix based on its gradient direction. This approach effectively reduces redundancy by providing a compact representation of the protein data while preserving important spatial information.
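A minimal NumPy sketch of the HOG-style descriptor just described (gradients of the PSSM, 25 sub-matrices, 10 orientation channels); the exact partitioning and binning details are assumptions for illustration.

```python
import numpy as np

def hog_features(pssm: np.ndarray, grid: int = 5, n_bins: int = 10) -> np.ndarray:
    """HOG-style descriptor of the PSSM treated as an image: gradients ->
    magnitude/direction -> 5 x 5 = 25 sub-matrices -> 10-bin orientation
    histograms weighted by gradient magnitude (25 * 10 = 250 features)."""
    grad_r, grad_c = np.gradient(pssm.astype(float))        # row-wise, column-wise gradients
    magnitude = np.hypot(grad_r, grad_c)
    direction = np.mod(np.arctan2(grad_r, grad_c), np.pi)   # orientations folded into [0, pi)
    feats = []
    for rows in np.array_split(np.arange(pssm.shape[0]), grid):
        for cols in np.array_split(np.arange(pssm.shape[1]), grid):
            mag = magnitude[np.ix_(rows, cols)].ravel()
            ang = direction[np.ix_(rows, cols)].ravel()
            hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
            feats.append(hist)
    return np.concatenate(feats)

print(hog_features(np.random.rand(120, 20)).shape)   # (250,)
```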
2.7. PsePSSM
The PsePSSM is a commonly utilized matrix descriptor in protein research [7]. It is specifically designed to preserve the essential information contained in the PSSM by considering the incorporation of PAAC. The PsePSSM descriptor is formulated as follows:
$$P_{\mathrm{PsePSSM}} = \left[\bar{E}_1, \ldots, \bar{E}_{20}, G_1^{lag}, \ldots, G_{20}^{lag}\right]^{T}, \qquad \bar{E}_j = \frac{1}{R}\sum_{i=1}^{R} E'_{i,j}, \quad (2.6)$$

where $lag$ refers to the distance between a residue and its neighboring residues. The formula of $G_j^{lag}$ is

$$G_j^{lag} = \frac{1}{R - lag} \sum_{i=1}^{R - lag} \left(E'_{i,j} - E'_{(i+lag),\,j}\right)^2, \quad j = 1, \ldots, 20, \quad (2.7)$$

where $E'_{i,j}$ refers to the normalized version of $E_{i,j}$.
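A minimal sketch of Eqs. (2.6)–(2.7): column means of the normalized PSSM plus lag-correlation terms. The per-position (row-wise) z-score normalization and the maximum lag of 5 are assumptions chosen for illustration, not values reported by the authors.

```python
import numpy as np

def psepssm_features(pssm: np.ndarray, max_lag: int = 5) -> np.ndarray:
    """PsePSSM descriptor: 20 column means of the normalised PSSM (E_bar_j)
    plus, for each lag, 20 squared-difference correlation terms (G_j^lag)."""
    mu = pssm.mean(axis=1, keepdims=True)
    sd = pssm.std(axis=1, keepdims=True) + 1e-8
    normalised = (pssm - mu) / sd                 # one common per-position normalisation
    feats = [normalised.mean(axis=0)]             # the 20 values of E_bar_j
    R = normalised.shape[0]
    for lag in range(1, max_lag + 1):
        diff = normalised[: R - lag] - normalised[lag:]
        feats.append((diff ** 2).mean(axis=0))    # G_j^lag for j = 1..20
    return np.concatenate(feats)

print(psepssm_features(np.random.rand(100, 20), max_lag=5).shape)   # (120,)
```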
2.8. DRHGNN
2.8.1. Hypergraph learning statement
In a basic graph, the samples are depicted as vertexes, and two connected vertexes are joined by an edge [29,30]. However, the data structure in practical applications may go beyond pairwise connections and may even be multi-modal. Accordingly, the hypergraph was proposed. Unlike a simple graph, a hypergraph comprises a vertex set and a hyperedge set in which each hyperedge connects two or more vertexes, as shown in Figure 2. A hypergraph is represented by G = (V, E, W), where V represents the vertex set and E represents the hyperedge set. W, a diagonal matrix of edge weights, assigns a weight to each hyperedge. The hypergraph structure is described by an incidence matrix H, with entries defined as

$$h(v, e) = \begin{cases} 1, & \text{if } v \in e \\ 0, & \text{if } v \notin e \end{cases} \quad (2.8)$$
Figure 2.
The comparison between graph and hypergraph.
Here, we can cast the membrane protein classification task on the hypergraph because more than two proteins can be linked by non-covalent interactions [18,19]. Let $X = [x_1, x_2, \ldots, x_N]^T \in \mathbb{R}^{N \times d}$ represent the features of the N membrane proteins. The hyperedges are constructed using the Euclidean distance $d(x_i, x_j) = \lVert x_i - x_j \rVert_2$ between two feature vectors. In the hyperedge construction, each vertex represents a membrane protein, and each hyperedge consists of one central vertex and its K nearest neighbors. As a result, N hyperedges, each containing K+1 vertexes, are generated. More specifically, each time we select one vertex in the dataset as the centroid, we use its K nearest neighbors in the selected feature space to generate one hyperedge, which includes the centroid itself, as illustrated in Figure 3. Thus, a hypergraph with N hyperedges is constructed from a single-modal representation of membrane proteins. The hypergraph is denoted by an incidence matrix $H \in \mathbb{R}^{N \times N}$, with N×(K+1) nonzero entries equal to 1 while the others equal zero.
Figure 3.
The schematic diagram of hyperedge generation and hypergraph generation.
In the case of multi-modal representations of membrane proteins, an incidence matrix $H_i$ is constructed for each modality of the membrane protein representation. After all the incidence matrices have been generated, they can be concatenated to build the incidence matrix $H = [H_1 \,|\, H_2 \,|\, \cdots]$ of a multi-modality hypergraph. Thus, a hypergraph is constructed from multi-modal representations of membrane proteins, as shown in Figure 3; it is worth noting that this flexibility of hypergraph generation gives it great expansibility toward multi-modal features.
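A minimal NumPy sketch of the hyperedge construction just described (one K-NN hyperedge per vertex) and of the column-wise concatenation of per-modality incidence matrices; K and the toy feature matrices are illustrative placeholders.

```python
import numpy as np
from scipy.spatial.distance import cdist

def build_incidence(X: np.ndarray, k: int = 8) -> np.ndarray:
    """One hyperedge per vertex: the centroid vertex plus its k nearest
    neighbours under Euclidean distance, encoded as a column of H (Eq. 2.8)."""
    distances = cdist(X, X)                        # pairwise Euclidean distances
    N = X.shape[0]
    H = np.zeros((N, N))
    for e in range(N):                             # hyperedge e is centred on vertex e
        members = np.argsort(distances[e])[: k + 1]   # includes the centroid itself
        H[members, e] = 1.0
    return H

# Multi-modal case: concatenate per-modality incidence matrices column-wise.
X_modality_a, X_modality_b = np.random.rand(50, 400), np.random.rand(50, 250)
H = np.hstack([build_incidence(X_modality_a, k=8), build_incidence(X_modality_b, k=8)])
print(H.shape)   # (50, 100)
```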
2.8.2. Hypergraph convolution
Feng et al. [31] first proposed the HGNN. They built a hyperedge convolution layer whose formulation is
$$X^{(l+1)} = \sigma\!\left(D_v^{-1/2} H W D_e^{-1} H^{T} D_v^{-1/2} X^{(l)} \Theta^{(l)}\right) \quad (2.9)$$

where $X^{(l)} \in \mathbb{R}^{N \times C}$ represents the hypergraph's signal at the l-th layer with N nodes and C-dimensional features, W is regarded as the weight matrix of all hyperedges, and $\Theta^{(l)}$ represents the parameter that is learned during the training process at the l-th layer. $\sigma$ represents the nonlinear activation function. $D_v$ is the vertex degrees' diagonal matrix, while $D_e$ is the edge degrees' diagonal matrix [30].

We define the hypergraph Laplacian $\Delta = D_v^{-1/2} H W D_e^{-1} H^{T} D_v^{-1/2}$; then a hyperedge convolution layer can be written compactly as $X^{(l+1)} = \sigma\!\left(\Delta X^{(l)} \Theta^{(l)}\right)$.
A hyperedge convolution layer achieves node-edge-node transform, which can refine a better representation of nodes and extract the high-order correlation from a hypergraph more efficiently.
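A compact PyTorch sketch of the hyperedge convolution in Eq. (2.9), assuming unit hyperedge weights by default; the class name, ReLU activation and toy shapes are illustrative rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class HGNNConv(nn.Module):
    """One hyperedge convolution layer, Eq. (2.9):
    X^{l+1} = sigma(Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X^{l} Theta^{l})."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    @staticmethod
    def laplacian(H: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        """Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} for incidence H (N x E), weights w (E,)."""
        Dv_inv_sqrt = torch.diag((H * w).sum(dim=1).clamp(min=1e-8) ** -0.5)  # vertex degrees
        De_inv = torch.diag(H.sum(dim=0).clamp(min=1e-8) ** -1)               # edge degrees
        return Dv_inv_sqrt @ H @ torch.diag(w) @ De_inv @ H.t() @ Dv_inv_sqrt

    def forward(self, X: torch.Tensor, H: torch.Tensor, w: torch.Tensor = None) -> torch.Tensor:
        w = torch.ones(H.shape[1]) if w is None else w
        return torch.relu(self.laplacian(H, w) @ self.theta(X))

# Toy usage: 50 proteins, 100 hyperedges (two concatenated modalities), 64-d features.
X, H = torch.rand(50, 64), (torch.rand(50, 100) > 0.8).float()
print(HGNNConv(64, 32)(X, H).shape)   # torch.Size([50, 32])
```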
2.8.3. Residual hypergraph convolution
Feng et al. [31] used two hyperedge convolution layers and then used the softmax function to obtain predicted labels. However, the performance of HGNN drops as the number of layers increases because of the over-smoothing issue.
To resolve the issue of over-smoothing, Huang et al. [20] and Chen et al. [22] used two simple and effective techniques, initial residual and identity mapping, based on their shallow models. Inspired by their method, we upgrade the HGNN by introducing initial residual and identity mapping to prevent over-smoothing and to gain accuracy from increased depth.
● Initial residual
Chen et al. [22] constructed a connection to the initial representation to relieve the over-smoothing problem. The initial residual connection guarantees that each node's final representation retains at least a proportion of the input feature regardless of how many layers we stack.
Gasteiger et al. [32] proposed approximate personalized propagation of neural predictions (APPNP), which applies a linear combination of each layer's output with the initial residual connection and gathers information from multi-hop neighbors by separating feature transformation from propagation, instead of expanding the number of neural network layers. Formally, APPNP's propagation (written here with the hypergraph Laplacian $\Delta$) is defined as

$$X^{(l+1)} = \sigma\!\left((1-\alpha)\,\Delta X^{(l)} + \alpha X^{(0)}\right) \quad (2.10)$$

In practice, we can set $\alpha$ to a small value (e.g., 0.1), so that each layer retains a proportion of the initial representation $X^{(0)}$.
● Identity mapping
However, APPNP remains a shallow model; thus, the initial residual alone cannot extend HGNN to a deep model. To resolve this issue, Chen et al. [22] added an identity matrix to the weight matrix according to the idea in ResNet of identity mapping, which ensures the DRHGNN model performs at least as well as its shallow version does.
Finally, a residual enhanced hyperedge convolution layer is formulated as
$$X^{(l+1)} = \sigma\!\left(\left((1-\alpha)\,\Delta X^{(l)} + \alpha X^{(0)}\right)\left((1-\beta_l) I + \beta_l \Theta^{(l)}\right)\right) \quad (2.11)$$

In practice, we set $\beta_l = \log\!\left(\frac{\lambda}{l} + 1\right) \approx \frac{\lambda}{l}$, where $\lambda$ is a hyperparameter.
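A minimal PyTorch sketch of the residual-enhanced layer in Eq. (2.11), combining the initial residual with GCNII-style identity mapping. The default values of alpha and lambda are illustrative assumptions (borrowed from the GCNII recipe), not values reported for this model.

```python
import math
import torch
import torch.nn as nn

class ResHGNNConv(nn.Module):
    """Residual-enhanced hyperedge convolution, Eq. (2.11):
    X^{l+1} = sigma(((1 - a) Delta X^{l} + a X^{0}) ((1 - b_l) I + b_l Theta^{l})),
    with initial residual strength a and identity-mapping weight b_l = log(lambda/l + 1)."""

    def __init__(self, dim: int, layer_idx: int, alpha: float = 0.1, lam: float = 0.5):
        super().__init__()
        self.theta = nn.Linear(dim, dim, bias=False)
        self.alpha = alpha
        self.beta = math.log(lam / layer_idx + 1)

    def forward(self, X: torch.Tensor, X0: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
        support = (1 - self.alpha) * (delta @ X) + self.alpha * X0          # initial residual
        out = (1 - self.beta) * support + self.beta * self.theta(support)   # identity mapping
        return torch.relu(out)
```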
2.8.4. DRHGNN analysis
Figure 4 illustrates the details of the DRHGNN. The multiple types of node features and the corresponding incidence matrices H modeling complex high-order correlations are concatenated, respectively, which handles the multi-modal representations of membrane proteins. Then, the concatenated features and incidence matrix are fed into the DRHGNN to obtain output labels for the nodes and eventually accomplish the classification task. As detailed in the section above, we can build a residual enhanced hypergraph convolution layer and then simply stack multiple residual hypergraph convolution blocks to tackle the over-smoothing problem of HGNN and enjoy an accuracy increase. Additional linear transforms are incorporated into the model's first and last layers, and the residual hypergraph convolutions are utilized for information propagation. The deep embeddings are finally used for the classification task.
Figure 4.
The DRHGNN framework. FC represents a fully connected layer.
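A sketch of the overall architecture described above and in Figure 4, reusing the HGNNConv.laplacian and ResHGNNConv sketches from the previous subsections; the hidden size, depth and dropout rate are placeholders (the hyperparameters actually used are listed in Table 3).

```python
import torch
import torch.nn as nn

class DRHGNN(nn.Module):
    """Input FC layer, a stack of residual hypergraph convolutions for
    information propagation, and an output FC layer producing class logits."""

    def __init__(self, in_dim: int, hidden: int, n_classes: int,
                 n_layers: int = 8, dropout: float = 0.5):
        super().__init__()
        self.fc_in = nn.Linear(in_dim, hidden)
        self.blocks = nn.ModuleList(
            [ResHGNNConv(hidden, layer_idx=l + 1) for l in range(n_layers)])
        self.fc_out = nn.Linear(hidden, n_classes)
        self.drop = nn.Dropout(dropout)

    def forward(self, X: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        delta = HGNNConv.laplacian(H, torch.ones(H.shape[1]))
        X0 = torch.relu(self.fc_in(self.drop(X)))      # initial representation X^(0)
        Xl = X0
        for block in self.blocks:
            Xl = block(self.drop(Xl), X0, delta)       # residual hypergraph convolutions
        return self.fc_out(self.drop(Xl))              # logits, e.g., 8 membrane protein types
```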
The DRHGNN has numerous hyperparameters. Instead of comparing all the possible hyperparameters, which usually takes several days, we used empirically based hyperparameters, which are shown in Table 3.
The baseline results were reproduced using their released codes, with hyperparameters adhering to the respective papers.
3.2. Metrics
We conducted accuracy calculations for predicting every type of membrane protein. We used accuracy (ACC), which measures the ratio of correctly predicted proteins to the total number of proteins in a specified dataset, to assess the performance of our model. The specific formula is
$$\mathrm{ACC} = \frac{n}{N} \quad (3.1)$$
where n stands for the number of proteins that are correctly predicted in a specified dataset, and N stands for the total number of proteins present in the dataset.
In order to further evaluate the performance of models, we also incorporated F1-score and Mathew's correlation coefficient (MCC) as evaluation metrics.
The F1-score is a useful metric for addressing the issue of imbalanced datasets, which is composed of precision and recall. Precision refers to the ratio of the number of correctly predicted samples to the total number of samples predicted as positive, while recall refers to the ratio of the number of correctly predicted samples to the total number of actual positive samples. The best value of F1-score is 1, while the worst value is 0. Their specific formulas are
$$\mathrm{Precision} = \frac{TP}{TP + FP} \quad (3.2)$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \quad (3.3)$$

$$F1\text{-}score = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \quad (3.4)$$
where TP, TN, FP and FN are true positive, true negative, false positive and false negative, respectively.
In order to comprehensively evaluate the F1-scores of multiple classes, we employed the macro average of F1-score, which aggregates the F1-score for different classes by taking their average with equal weights assigned to all classes.
MCC is widely acknowledged as a superior performance metric for the classification of imbalanced data. It is defined within the range of $[-1, 1]$, where a value of 1 indicates that the classifier accurately predicts all positive instances and negative instances, while a value of $-1$ signifies that the classifier incorrectly predicts all instances. The specific formula is

$$\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}} \quad (3.5)$$
The overall MCC for all categories is computed by averaging the MCC values of individual categories.
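A short sketch of the evaluation protocol described above (Eqs. 3.1–3.5): overall accuracy, macro-averaged F1 and the average of per-class one-vs-rest MCC values. It uses scikit-learn's metrics; the tiny label vectors are illustrative only.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

def evaluate(y_true, y_pred, n_classes: int):
    """Return (ACC, macro F1, mean per-class MCC) for multiclass predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = accuracy_score(y_true, y_pred)
    macro_f1 = f1_score(y_true, y_pred, average="macro")
    mcc = np.mean([matthews_corrcoef(y_true == c, y_pred == c)   # one-vs-rest per class
                   for c in range(n_classes)])
    return acc, macro_f1, mcc

print(evaluate([0, 1, 2, 2, 1, 0], [0, 2, 2, 2, 1, 0], n_classes=3))
```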
3.3. The selection of K value when constructing the hypergraph
The selection of the K nearest neighbors plays a vital role in the construction of the hyperedges, as it has a significant impact on the model's performance. The K value is selected by training the model with different K values and evaluating its performance; the optimal K value is determined based on the performance metric obtained from the validation set. We performed K value experiments on the four datasets using DRHGNN, with the macro average of the F1-score as the performance metric. As observed from Table 4, each dataset achieves its best experimental result at a different K value, specifically K = 8, 10, 12 and 2, respectively. Therefore, when conducting experiments on the four datasets, we selected the K values in sequence as 8, 10, 12 and 2.
Table 4.
The performance of DRHGNN with different K values on four datasets. The best result for each dataset is bolded.
3.4. Performance comparison of DRHGNN and HGNN with different layers
The performance of DRHGNN against HGNN with different layers on the four datasets is reported in Table 5. Columns 4–9 show the ACC and macro average of the F1-score of DRHGNN and HGNN with different layers on the four datasets. For better comparison, we present the results in Figure 5. By analyzing Table 5 and Figure 5, we can observe two points: 1) DRHGNN achieves much better performance than HGNN on the four datasets, with accuracy gains of 3.738%, 3.903%, 4.106% and 1.028%, respectively, and macro average F1-score gains of 15.306%, 11.843%, 13.887% and 3.591%, respectively, at their optimal depths. 2) The residual enhanced model (DRHGNN) has stable performance, while the performance of HGNN deteriorates as the number of layers increases. The potential reason for HGNN's performance degradation with increasing layer depth is the over-smoothing issue. The performance of DRHGNN persistently improves and achieves the best accuracy on the four datasets at layers 4, 8, 4 and 8, respectively, and the best macro average of the F1-score at layers 4, 8, 4 and 16, respectively.
Table 5.
Comparison of the ACC, macro average of F1-score between DRHGNN and HGNN with different depths on four datasets. The best result of methods for each dataset is bolded.
Figure 5.
The performance comparison of DRHGNN and HGNN with different layers on membrane protein classification task. (a) The performance comparison of DRHGNN and HGNN on Dataset 1; (b) The performance comparison of DRHGNN and HGNN on Dataset 2; (c) The performance comparison of DRHGNN and HGNN on Dataset 3; (d) The performance comparison of DRHGNN and HGNN on Dataset 4.
3.5. Performance comparison with multiple recently developed advanced methods
The classification accuracy of DRHGNN and multiple recently developed advanced methods is summarized in Tables 6–9. Tables 6–8 present a comparison of the accuracy for each type of membrane protein and the overall accuracy across all membrane proteins on Dataset 1, Dataset 2 and Dataset 3 using different methods. As Tables 6–8 show, the accuracy for each type of membrane protein obtained using our method is generally higher than that achieved by other methods, and the overall accuracy is also superior. More specifically, compared with MemType-2L [7] and the hypergraph neural network [34] on Dataset 1, DRHGNN achieves overall accuracy gains of 2.63 and 3.738%, respectively. Compared with MemType-2L [7] and the hypergraph neural network [34] on Dataset 2, DRHGNN achieves overall accuracy gains of 5.507 and 3.903%, respectively. Compared with MemType-2L [7] and the hypergraph neural network [34] on Dataset 3, DRHGNN achieves overall accuracy gains of 11.128 and 4.106%, respectively. Furthermore, within these three datasets, the fifth type of membrane protein exhibits the highest accuracy compared with the other types, which can potentially be attributed to the significantly larger number of samples available for the fifth type in these datasets. Table 9 presents a comparison of the overall accuracy between our proposed method and other methods on Dataset 4. As Table 9 shows, our method achieved the best performance among all the compared methods. More specifically, compared with CPSR [10] and the two-stage SVM [14] on Dataset 4, DRHGNN achieves overall accuracy gains of 3.314 and 1.814%, respectively. These results demonstrate the superior performance of DRHGNN on the membrane protein classification task. The detailed performance of DRHGNN on the four datasets is shown in Table 10.
Table 6.
Comparison of the ACC between DRHGNN and multiple recent state of the art methods on Dataset 1. The best result among methods is bolded.
To further analyze the stability of DRHGNN compared with HGNN, we conducted an analysis by adjusting the training rate. All experiments were carried out with five different training rates, each with five distinct seeds, and we recorded the best results using the optimal number of layers in each experiment. Table 11 and Figure 6 show that DRHGNN consistently performs better than HGNN across all training rates, with around 1.5 to 5% overall accuracy gains and around 3.591 to 15.306% macro average F1-score gains. This demonstrates that DRHGNN stably outperforms HGNN at different ratios. Meanwhile, DRHGNN shows stability especially at small training rates and performs best around the original training rate.
Table 11.
Summaries of the ACC, macro average of F1-score of DRHGNN and HGNN with different training ratios.
Figure 6.
Stability analysis. The performance of DRHGNN and HGNN with different training ratios on membrane protein classification task. (a) The performance on Dataset 1; (b) The performance on Dataset 2; (c) The performance on Dataset 3; (d) The performance on Dataset 4.
We conducted an ablation study on initial residual and identity mapping. In Table 12, columns 4–9 show the accuracy and the macro average of the F1-score of the four methods with different depths of the network on the four datasets. As Table 12 and Figure 7 show, HGNN with identity mapping mitigates the over-smoothing problem a little, while HGNN with initial residual reduces it greatly. Meanwhile, adopting initial residual and identity mapping together significantly improves performance while effectively reducing the over-smoothing problem. Furthermore, we found that the results of HGNN adopting both initial residual and identity mapping and of HGNN using only the initial residual are very close; however, HGNN adopting both outperforms in terms of accuracy and the macro average of the F1-score and reaches the best result faster than adopting the initial residual alone.
Table 12.
Ablation study on initial residual and identity mapping. The best result of methods for each dataset is bolded.
Figure 7.
Ablation study on initial residual and identity mapping. The performance comparison of DRHGNN, HGNN, HGNN with initial residual, HGNN with identity mapping with different layers on membrane protein classification task. (a) The performance comparison on Dataset 1; (b) The performance comparison on Dataset 2; (c) The performance comparison on Dataset 3; (d) The performance comparison on Dataset 4.
This study proposed a DRHGNN enhanced with initial residual and identity mapping, based on HGNN, to further learn the representations of membrane proteins for identifying the types of membrane proteins.
First, each membrane protein was represented by the features extracted with the five methods. Second, an incidence matrix H was constructed according to each modality of the membrane protein representation. Lastly, the multi-modal membrane protein features and the corresponding incidence matrices H were concatenated, respectively, and both were fed into the DRHGNN for the membrane protein classification task.
In extensive experiments on the membrane protein classification task, our method achieved much better performance on the four datasets.
The DRHGNN addresses the following issues: the high-order correlation among membrane proteins and the multi-modal representations of membrane proteins. In the meantime, the DRHGNN can handle the over-smoothing issue that arises as the number of model layers increases, compared with HGNN.
However, we found three areas for improvement while doing the experiments. One is that DRHGNN is quite sensitive to different datasets; specifically, the performance on Dataset 4 is better than on the other datasets. The overall quantity of Dataset 4, the partitioning of its training and testing sets and the distribution of certain membrane protein classes differ from the other datasets, which may significantly influence the training process and the generalization capability of the model. Another is that we ignored updating the hyperedges to follow the adjusted feature embeddings in different layers; the model is still worth enhancing. The third is that the hyperedges were constructed based on feature similarity, which may not directly represent physical interactions between the membrane proteins, so our approach should be considered an approximation rather than a direct representation of interactions.
The main challenge for future research is to resolve three issues: DRHGNN's sensitivity to different datasets, updating hyperedges to follow the adjusted feature embeddings in different layers and accurately capturing physical interactions among membrane proteins.
In the meantime, progress in interaction prediction research across diverse fields of computational biology holds great promise for gaining valuable insights into genetic markers and ncRNAs associated with membrane protein types, such as the prediction of miRNA-lncRNA interactions using a method based on the graph convolutional network (GCN) and the conditional random field (CRF) [35], gene function and protein association (GFPA) that extracts reliable associations between gene function and cell surface proteins from single-cell multimodal data [36], prediction of lncRNA-miRNA associations using a network distance analysis model [37], prediction of the potential associations of disease-related metabolites using a GCN with a graph attention network [38], prediction of human ether-a-go-go-related gene (hERG) blockers using molecular fingerprints and a graph attention mechanism [39] and prediction of the potential associations between metabolites and diseases based on an autoencoder and nonnegative matrix factorization [40]. These will also be our future research directions.
Use of AI tools declaration
The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.
Acknowledgments
This paper is supported by the National Natural Science Foundation of China (62372318, 61902272, 62073231, 62176175, 61876217, 61902271), the National Research Project (2020YFC2006602), the Provincial Key Laboratory for Computer Information Processing Technology, Soochow University (KJS2166) and the Opening Topic Fund of Big Data Intelligent Engineering Laboratory of Jiangsu Province (SDGC2157).
Conflict of interest
All authors declare no conflicts of interest in this paper.